Test Report: KVM_Linux_crio 19787

                    
c1252a7f2092ae156b37572b060158ae23786afe:2024-10-10:36592

Failed tests (31/318)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 150.45
37 TestAddons/parallel/MetricsServer 350.41
45 TestAddons/StoppedEnableDisable 154.32
164 TestMultiControlPlane/serial/StopSecondaryNode 141.81
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.74
166 TestMultiControlPlane/serial/RestartSecondaryNode 6.74
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.16
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 401.91
171 TestMultiControlPlane/serial/StopCluster 142.2
231 TestMultiNode/serial/RestartKeepsNodes 323.29
233 TestMultiNode/serial/StopMultiNode 145.24
240 TestPreload 271.66
248 TestKubernetesUpgrade 371.95
322 TestStartStop/group/old-k8s-version/serial/FirstStart 286.53
345 TestStartStop/group/no-preload/serial/Stop 139.38
353 TestStartStop/group/embed-certs/serial/Stop 139.33
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
362 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
363 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 83.7
366 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.14
367 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
371 TestStartStop/group/old-k8s-version/serial/SecondStart 714.09
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
374 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.36
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.61
376 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.77
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.55
378 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 439.72
379 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 536.07
380 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 351.36
381 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 172.71
TestAddons/parallel/Ingress (150.45s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-473910 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-473910 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-473910 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2d2384a6-8648-46d4-94c5-9c3ec997ecdc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2d2384a6-8648-46d4-94c5-9c3ec997ecdc] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00645567s
I1010 18:00:47.173810   88876 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-473910 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.27401952s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-473910 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.238
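For reference, the step that fails above is the in-cluster reachability check: the test curls the ingress from inside the node with the Host header nginx.example.com, and the remote command exiting with status 28 matches curl's timeout exit code. A minimal sketch of reproducing the same check by hand, assuming the addons-473910 profile is still running and the repository's testdata manifests are available (the explicit pod wait below is a stand-in for the test's own polling loop):

# Wait for the ingress-nginx controller to be ready (same selector and timeout as the test).
kubectl --context addons-473910 wait --for=condition=ready \
  --namespace=ingress-nginx pod \
  --selector=app.kubernetes.io/component=controller --timeout=90s

# Deploy the sample ingress plus the nginx pod/service it routes to.
kubectl --context addons-473910 replace --force -f testdata/nginx-ingress-v1.yaml
kubectl --context addons-473910 replace --force -f testdata/nginx-pod-svc.yaml
kubectl --context addons-473910 wait --for=condition=ready pod -l run=nginx --timeout=8m

# The check that timed out in this run: curl the ingress from inside the VM.
out/minikube-linux-amd64 -p addons-473910 ssh \
  "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
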
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-473910 -n addons-473910
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-473910 logs -n 25: (1.266665468s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:57 UTC |
	| delete  | -p download-only-497455                                                                     | download-only-497455 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:57 UTC |
	| delete  | -p download-only-058787                                                                     | download-only-058787 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:57 UTC |
	| delete  | -p download-only-497455                                                                     | download-only-497455 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-244092 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC |                     |
	|         | binary-mirror-244092                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46773                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-244092                                                                     | binary-mirror-244092 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:57 UTC |
	| addons  | enable dashboard -p                                                                         | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC |                     |
	|         | addons-473910                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC |                     |
	|         | addons-473910                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-473910 --wait=true                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 17:59 UTC | 10 Oct 24 17:59 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-473910 ip                                                                            | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-473910 addons                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-473910 ssh curl -s                                                                   | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-473910 ssh cat                                                                       | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	|         | /opt/local-path-provisioner/pvc-4c005375-b770-4d67-a3b3-31e1e4368658_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473910 addons                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473910 addons                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473910 addons                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:01 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:01 UTC |
	|         | -p addons-473910                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473910 addons                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:01 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:02 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-473910 ip                                                                            | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:02 UTC | 10 Oct 24 18:02 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 17:57:48
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 17:57:48.070881   89506 out.go:345] Setting OutFile to fd 1 ...
	I1010 17:57:48.071011   89506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 17:57:48.071023   89506 out.go:358] Setting ErrFile to fd 2...
	I1010 17:57:48.071030   89506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 17:57:48.071233   89506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 17:57:48.071830   89506 out.go:352] Setting JSON to false
	I1010 17:57:48.072647   89506 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6014,"bootTime":1728577054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 17:57:48.072748   89506 start.go:139] virtualization: kvm guest
	I1010 17:57:48.074963   89506 out.go:177] * [addons-473910] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 17:57:48.076366   89506 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 17:57:48.076387   89506 notify.go:220] Checking for updates...
	I1010 17:57:48.079485   89506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 17:57:48.081066   89506 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 17:57:48.082398   89506 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 17:57:48.083656   89506 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 17:57:48.084804   89506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 17:57:48.086204   89506 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 17:57:48.120527   89506 out.go:177] * Using the kvm2 driver based on user configuration
	I1010 17:57:48.121879   89506 start.go:297] selected driver: kvm2
	I1010 17:57:48.121898   89506 start.go:901] validating driver "kvm2" against <nil>
	I1010 17:57:48.121909   89506 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 17:57:48.122655   89506 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 17:57:48.122736   89506 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 17:57:48.137648   89506 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 17:57:48.137702   89506 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 17:57:48.137941   89506 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 17:57:48.137973   89506 cni.go:84] Creating CNI manager for ""
	I1010 17:57:48.138021   89506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 17:57:48.138029   89506 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 17:57:48.138098   89506 start.go:340] cluster config:
	{Name:addons-473910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:57:48.138194   89506 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 17:57:48.140093   89506 out.go:177] * Starting "addons-473910" primary control-plane node in "addons-473910" cluster
	I1010 17:57:48.141415   89506 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 17:57:48.141448   89506 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 17:57:48.141457   89506 cache.go:56] Caching tarball of preloaded images
	I1010 17:57:48.141538   89506 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 17:57:48.141550   89506 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 17:57:48.141881   89506 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/config.json ...
	I1010 17:57:48.141907   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/config.json: {Name:mke534372be6f27906cf058c392cb887dd55fb57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:57:48.142055   89506 start.go:360] acquireMachinesLock for addons-473910: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 17:57:48.142099   89506 start.go:364] duration metric: took 31.957µs to acquireMachinesLock for "addons-473910"
	I1010 17:57:48.142115   89506 start.go:93] Provisioning new machine with config: &{Name:addons-473910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 17:57:48.142169   89506 start.go:125] createHost starting for "" (driver="kvm2")
	I1010 17:57:48.143734   89506 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1010 17:57:48.143874   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:57:48.143914   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:57:48.158502   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40005
	I1010 17:57:48.159083   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:57:48.159752   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:57:48.159776   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:57:48.160146   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:57:48.160365   89506 main.go:141] libmachine: (addons-473910) Calling .GetMachineName
	I1010 17:57:48.160554   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:57:48.160716   89506 start.go:159] libmachine.API.Create for "addons-473910" (driver="kvm2")
	I1010 17:57:48.160747   89506 client.go:168] LocalClient.Create starting
	I1010 17:57:48.160786   89506 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 17:57:48.228829   89506 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 17:57:48.488615   89506 main.go:141] libmachine: Running pre-create checks...
	I1010 17:57:48.488644   89506 main.go:141] libmachine: (addons-473910) Calling .PreCreateCheck
	I1010 17:57:48.489161   89506 main.go:141] libmachine: (addons-473910) Calling .GetConfigRaw
	I1010 17:57:48.489670   89506 main.go:141] libmachine: Creating machine...
	I1010 17:57:48.489687   89506 main.go:141] libmachine: (addons-473910) Calling .Create
	I1010 17:57:48.489903   89506 main.go:141] libmachine: (addons-473910) Creating KVM machine...
	I1010 17:57:48.491281   89506 main.go:141] libmachine: (addons-473910) DBG | found existing default KVM network
	I1010 17:57:48.492080   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:48.491917   89528 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I1010 17:57:48.492142   89506 main.go:141] libmachine: (addons-473910) DBG | created network xml: 
	I1010 17:57:48.492168   89506 main.go:141] libmachine: (addons-473910) DBG | <network>
	I1010 17:57:48.492178   89506 main.go:141] libmachine: (addons-473910) DBG |   <name>mk-addons-473910</name>
	I1010 17:57:48.492192   89506 main.go:141] libmachine: (addons-473910) DBG |   <dns enable='no'/>
	I1010 17:57:48.492201   89506 main.go:141] libmachine: (addons-473910) DBG |   
	I1010 17:57:48.492210   89506 main.go:141] libmachine: (addons-473910) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1010 17:57:48.492219   89506 main.go:141] libmachine: (addons-473910) DBG |     <dhcp>
	I1010 17:57:48.492227   89506 main.go:141] libmachine: (addons-473910) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1010 17:57:48.492255   89506 main.go:141] libmachine: (addons-473910) DBG |     </dhcp>
	I1010 17:57:48.492279   89506 main.go:141] libmachine: (addons-473910) DBG |   </ip>
	I1010 17:57:48.492292   89506 main.go:141] libmachine: (addons-473910) DBG |   
	I1010 17:57:48.492301   89506 main.go:141] libmachine: (addons-473910) DBG | </network>
	I1010 17:57:48.492314   89506 main.go:141] libmachine: (addons-473910) DBG | 
	I1010 17:57:48.497633   89506 main.go:141] libmachine: (addons-473910) DBG | trying to create private KVM network mk-addons-473910 192.168.39.0/24...
	I1010 17:57:48.565742   89506 main.go:141] libmachine: (addons-473910) DBG | private KVM network mk-addons-473910 192.168.39.0/24 created
	I1010 17:57:48.565778   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:48.565696   89528 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 17:57:48.565809   89506 main.go:141] libmachine: (addons-473910) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910 ...
	I1010 17:57:48.565829   89506 main.go:141] libmachine: (addons-473910) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 17:57:48.565925   89506 main.go:141] libmachine: (addons-473910) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 17:57:48.849550   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:48.849386   89528 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa...
	I1010 17:57:48.967274   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:48.967133   89528 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/addons-473910.rawdisk...
	I1010 17:57:48.967309   89506 main.go:141] libmachine: (addons-473910) DBG | Writing magic tar header
	I1010 17:57:48.967323   89506 main.go:141] libmachine: (addons-473910) DBG | Writing SSH key tar header
	I1010 17:57:48.967336   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:48.967252   89528 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910 ...
	I1010 17:57:48.967352   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910
	I1010 17:57:48.967370   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910 (perms=drwx------)
	I1010 17:57:48.967385   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 17:57:48.967395   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 17:57:48.967401   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 17:57:48.967408   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 17:57:48.967412   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins
	I1010 17:57:48.967420   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home
	I1010 17:57:48.967428   89506 main.go:141] libmachine: (addons-473910) DBG | Skipping /home - not owner
	I1010 17:57:48.967473   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 17:57:48.967503   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 17:57:48.967515   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 17:57:48.967523   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 17:57:48.967537   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 17:57:48.967545   89506 main.go:141] libmachine: (addons-473910) Creating domain...
	I1010 17:57:48.968726   89506 main.go:141] libmachine: (addons-473910) define libvirt domain using xml: 
	I1010 17:57:48.968757   89506 main.go:141] libmachine: (addons-473910) <domain type='kvm'>
	I1010 17:57:48.968768   89506 main.go:141] libmachine: (addons-473910)   <name>addons-473910</name>
	I1010 17:57:48.968779   89506 main.go:141] libmachine: (addons-473910)   <memory unit='MiB'>4000</memory>
	I1010 17:57:48.968787   89506 main.go:141] libmachine: (addons-473910)   <vcpu>2</vcpu>
	I1010 17:57:48.968797   89506 main.go:141] libmachine: (addons-473910)   <features>
	I1010 17:57:48.968804   89506 main.go:141] libmachine: (addons-473910)     <acpi/>
	I1010 17:57:48.968811   89506 main.go:141] libmachine: (addons-473910)     <apic/>
	I1010 17:57:48.968821   89506 main.go:141] libmachine: (addons-473910)     <pae/>
	I1010 17:57:48.968829   89506 main.go:141] libmachine: (addons-473910)     
	I1010 17:57:48.968864   89506 main.go:141] libmachine: (addons-473910)   </features>
	I1010 17:57:48.968883   89506 main.go:141] libmachine: (addons-473910)   <cpu mode='host-passthrough'>
	I1010 17:57:48.968894   89506 main.go:141] libmachine: (addons-473910)   
	I1010 17:57:48.968919   89506 main.go:141] libmachine: (addons-473910)   </cpu>
	I1010 17:57:48.968929   89506 main.go:141] libmachine: (addons-473910)   <os>
	I1010 17:57:48.968937   89506 main.go:141] libmachine: (addons-473910)     <type>hvm</type>
	I1010 17:57:48.968948   89506 main.go:141] libmachine: (addons-473910)     <boot dev='cdrom'/>
	I1010 17:57:48.968956   89506 main.go:141] libmachine: (addons-473910)     <boot dev='hd'/>
	I1010 17:57:48.968964   89506 main.go:141] libmachine: (addons-473910)     <bootmenu enable='no'/>
	I1010 17:57:48.968973   89506 main.go:141] libmachine: (addons-473910)   </os>
	I1010 17:57:48.969011   89506 main.go:141] libmachine: (addons-473910)   <devices>
	I1010 17:57:48.969033   89506 main.go:141] libmachine: (addons-473910)     <disk type='file' device='cdrom'>
	I1010 17:57:48.969043   89506 main.go:141] libmachine: (addons-473910)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/boot2docker.iso'/>
	I1010 17:57:48.969054   89506 main.go:141] libmachine: (addons-473910)       <target dev='hdc' bus='scsi'/>
	I1010 17:57:48.969061   89506 main.go:141] libmachine: (addons-473910)       <readonly/>
	I1010 17:57:48.969069   89506 main.go:141] libmachine: (addons-473910)     </disk>
	I1010 17:57:48.969076   89506 main.go:141] libmachine: (addons-473910)     <disk type='file' device='disk'>
	I1010 17:57:48.969085   89506 main.go:141] libmachine: (addons-473910)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 17:57:48.969099   89506 main.go:141] libmachine: (addons-473910)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/addons-473910.rawdisk'/>
	I1010 17:57:48.969107   89506 main.go:141] libmachine: (addons-473910)       <target dev='hda' bus='virtio'/>
	I1010 17:57:48.969112   89506 main.go:141] libmachine: (addons-473910)     </disk>
	I1010 17:57:48.969119   89506 main.go:141] libmachine: (addons-473910)     <interface type='network'>
	I1010 17:57:48.969124   89506 main.go:141] libmachine: (addons-473910)       <source network='mk-addons-473910'/>
	I1010 17:57:48.969131   89506 main.go:141] libmachine: (addons-473910)       <model type='virtio'/>
	I1010 17:57:48.969136   89506 main.go:141] libmachine: (addons-473910)     </interface>
	I1010 17:57:48.969142   89506 main.go:141] libmachine: (addons-473910)     <interface type='network'>
	I1010 17:57:48.969148   89506 main.go:141] libmachine: (addons-473910)       <source network='default'/>
	I1010 17:57:48.969154   89506 main.go:141] libmachine: (addons-473910)       <model type='virtio'/>
	I1010 17:57:48.969159   89506 main.go:141] libmachine: (addons-473910)     </interface>
	I1010 17:57:48.969164   89506 main.go:141] libmachine: (addons-473910)     <serial type='pty'>
	I1010 17:57:48.969169   89506 main.go:141] libmachine: (addons-473910)       <target port='0'/>
	I1010 17:57:48.969175   89506 main.go:141] libmachine: (addons-473910)     </serial>
	I1010 17:57:48.969180   89506 main.go:141] libmachine: (addons-473910)     <console type='pty'>
	I1010 17:57:48.969190   89506 main.go:141] libmachine: (addons-473910)       <target type='serial' port='0'/>
	I1010 17:57:48.969198   89506 main.go:141] libmachine: (addons-473910)     </console>
	I1010 17:57:48.969202   89506 main.go:141] libmachine: (addons-473910)     <rng model='virtio'>
	I1010 17:57:48.969214   89506 main.go:141] libmachine: (addons-473910)       <backend model='random'>/dev/random</backend>
	I1010 17:57:48.969222   89506 main.go:141] libmachine: (addons-473910)     </rng>
	I1010 17:57:48.969229   89506 main.go:141] libmachine: (addons-473910)     
	I1010 17:57:48.969233   89506 main.go:141] libmachine: (addons-473910)     
	I1010 17:57:48.969238   89506 main.go:141] libmachine: (addons-473910)   </devices>
	I1010 17:57:48.969244   89506 main.go:141] libmachine: (addons-473910) </domain>
	I1010 17:57:48.969281   89506 main.go:141] libmachine: (addons-473910) 
	I1010 17:57:48.973836   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6d:5c:17 in network default
	I1010 17:57:48.974505   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:48.974529   89506 main.go:141] libmachine: (addons-473910) Ensuring networks are active...
	I1010 17:57:48.975241   89506 main.go:141] libmachine: (addons-473910) Ensuring network default is active
	I1010 17:57:48.975698   89506 main.go:141] libmachine: (addons-473910) Ensuring network mk-addons-473910 is active
	I1010 17:57:48.976185   89506 main.go:141] libmachine: (addons-473910) Getting domain xml...
	I1010 17:57:48.976915   89506 main.go:141] libmachine: (addons-473910) Creating domain...
	I1010 17:57:50.189530   89506 main.go:141] libmachine: (addons-473910) Waiting to get IP...
	I1010 17:57:50.190355   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:50.190802   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:50.190831   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:50.190755   89528 retry.go:31] will retry after 227.603693ms: waiting for machine to come up
	I1010 17:57:50.420362   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:50.420824   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:50.420864   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:50.420768   89528 retry.go:31] will retry after 387.707808ms: waiting for machine to come up
	I1010 17:57:50.811303   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:50.811780   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:50.811810   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:50.811717   89528 retry.go:31] will retry after 461.409061ms: waiting for machine to come up
	I1010 17:57:51.274344   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:51.274796   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:51.274820   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:51.274757   89528 retry.go:31] will retry after 450.992562ms: waiting for machine to come up
	I1010 17:57:51.727071   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:51.727456   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:51.727491   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:51.727386   89528 retry.go:31] will retry after 742.174885ms: waiting for machine to come up
	I1010 17:57:52.471303   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:52.471624   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:52.471651   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:52.471583   89528 retry.go:31] will retry after 814.191957ms: waiting for machine to come up
	I1010 17:57:53.287336   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:53.287807   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:53.287831   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:53.287751   89528 retry.go:31] will retry after 1.101513633s: waiting for machine to come up
	I1010 17:57:54.390576   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:54.390993   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:54.391016   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:54.390947   89528 retry.go:31] will retry after 1.215556072s: waiting for machine to come up
	I1010 17:57:55.608558   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:55.608950   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:55.608974   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:55.608921   89528 retry.go:31] will retry after 1.607661932s: waiting for machine to come up
	I1010 17:57:57.218960   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:57.219583   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:57.219608   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:57.219530   89528 retry.go:31] will retry after 1.778765799s: waiting for machine to come up
	I1010 17:57:58.999898   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:59.000435   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:59.000469   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:59.000377   89528 retry.go:31] will retry after 1.840094334s: waiting for machine to come up
	I1010 17:58:00.843706   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:00.844181   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:58:00.844210   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:58:00.844106   89528 retry.go:31] will retry after 2.961379135s: waiting for machine to come up
	I1010 17:58:03.806890   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:03.807309   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:58:03.807337   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:58:03.807256   89528 retry.go:31] will retry after 3.630385898s: waiting for machine to come up
	I1010 17:58:07.442208   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:07.442842   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:58:07.442864   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:58:07.442802   89528 retry.go:31] will retry after 3.657313932s: waiting for machine to come up
	I1010 17:58:11.103605   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.103959   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has current primary IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.103984   89506 main.go:141] libmachine: (addons-473910) Found IP for machine: 192.168.39.238
	I1010 17:58:11.103996   89506 main.go:141] libmachine: (addons-473910) Reserving static IP address...
	I1010 17:58:11.104342   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find host DHCP lease matching {name: "addons-473910", mac: "52:54:00:6b:7f:56", ip: "192.168.39.238"} in network mk-addons-473910
	I1010 17:58:11.181678   89506 main.go:141] libmachine: (addons-473910) DBG | Getting to WaitForSSH function...
	I1010 17:58:11.181706   89506 main.go:141] libmachine: (addons-473910) Reserved static IP address: 192.168.39.238
	I1010 17:58:11.181774   89506 main.go:141] libmachine: (addons-473910) Waiting for SSH to be available...
	I1010 17:58:11.184533   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.185035   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.185075   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.185293   89506 main.go:141] libmachine: (addons-473910) DBG | Using SSH client type: external
	I1010 17:58:11.185316   89506 main.go:141] libmachine: (addons-473910) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa (-rw-------)
	I1010 17:58:11.185337   89506 main.go:141] libmachine: (addons-473910) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 17:58:11.185349   89506 main.go:141] libmachine: (addons-473910) DBG | About to run SSH command:
	I1010 17:58:11.185357   89506 main.go:141] libmachine: (addons-473910) DBG | exit 0
	I1010 17:58:11.309009   89506 main.go:141] libmachine: (addons-473910) DBG | SSH cmd err, output: <nil>: 
	I1010 17:58:11.309275   89506 main.go:141] libmachine: (addons-473910) KVM machine creation complete!
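The creation phase logged above (private network definition, DHCP-lease polling, SSH probe) can also be inspected from the host with stock libvirt tooling; a hedged sketch, assuming virsh is available on the same host and pointed at the qemu:///system URI this run uses:

virsh --connect qemu:///system net-dumpxml mk-addons-473910      # network XML minikube defined for this profile
virsh --connect qemu:///system net-dhcp-leases mk-addons-473910  # lease that resolved to 192.168.39.238
virsh --connect qemu:///system dumpxml addons-473910             # full domain definition
virsh --connect qemu:///system domifaddr addons-473910           # addresses libvirt reports for the guest
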
	I1010 17:58:11.309595   89506 main.go:141] libmachine: (addons-473910) Calling .GetConfigRaw
	I1010 17:58:11.310247   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:11.310456   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:11.310664   89506 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 17:58:11.310701   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:11.311947   89506 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 17:58:11.311982   89506 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 17:58:11.311987   89506 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 17:58:11.311997   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.314265   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.314609   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.314634   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.314725   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:11.314896   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.315054   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.315245   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:11.315439   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:11.315758   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:11.315781   89506 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 17:58:11.424413   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 17:58:11.424435   89506 main.go:141] libmachine: Detecting the provisioner...
	I1010 17:58:11.424444   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.427172   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.427546   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.427578   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.427764   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:11.427973   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.428150   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.428302   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:11.428504   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:11.428720   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:11.428735   89506 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 17:58:11.537810   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 17:58:11.537946   89506 main.go:141] libmachine: found compatible host: buildroot
	I1010 17:58:11.537961   89506 main.go:141] libmachine: Provisioning with buildroot...
	I1010 17:58:11.537971   89506 main.go:141] libmachine: (addons-473910) Calling .GetMachineName
	I1010 17:58:11.538246   89506 buildroot.go:166] provisioning hostname "addons-473910"
	I1010 17:58:11.538274   89506 main.go:141] libmachine: (addons-473910) Calling .GetMachineName
	I1010 17:58:11.538480   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.541271   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.541705   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.541722   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.541938   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:11.542147   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.542311   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.542454   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:11.542633   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:11.542798   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:11.542809   89506 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-473910 && echo "addons-473910" | sudo tee /etc/hostname
	I1010 17:58:11.662958   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-473910
	
	I1010 17:58:11.662986   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.665789   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.666201   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.666232   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.666603   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:11.666771   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.666942   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.667074   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:11.667368   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:11.667599   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:11.667620   89506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-473910' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-473910/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-473910' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 17:58:11.781956   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 17:58:11.781988   89506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 17:58:11.782073   89506 buildroot.go:174] setting up certificates
	I1010 17:58:11.782091   89506 provision.go:84] configureAuth start
	I1010 17:58:11.782113   89506 main.go:141] libmachine: (addons-473910) Calling .GetMachineName
	I1010 17:58:11.782422   89506 main.go:141] libmachine: (addons-473910) Calling .GetIP
	I1010 17:58:11.785044   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.785381   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.785416   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.785523   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.787667   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.787976   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.788007   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.788191   89506 provision.go:143] copyHostCerts
	I1010 17:58:11.788278   89506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 17:58:11.788423   89506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 17:58:11.788529   89506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 17:58:11.788616   89506 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.addons-473910 san=[127.0.0.1 192.168.39.238 addons-473910 localhost minikube]
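	The server cert generated above should carry the SANs listed in san=[...]; as an illustrative sketch only (using the server.pem path from the log and standard openssl/grep, not part of the logged run), it can be confirmed with:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'   # expect 127.0.0.1, 192.168.39.238, addons-473910, localhost, minikube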
	I1010 17:58:11.974798   89506 provision.go:177] copyRemoteCerts
	I1010 17:58:11.974886   89506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 17:58:11.974920   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.977540   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.977864   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.977895   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.978023   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:11.978223   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.978370   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:11.978497   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:12.063621   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 17:58:12.088428   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 17:58:12.113481   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 17:58:12.137853   89506 provision.go:87] duration metric: took 355.745133ms to configureAuth
	I1010 17:58:12.137885   89506 buildroot.go:189] setting minikube options for container-runtime
	I1010 17:58:12.138114   89506 config.go:182] Loaded profile config "addons-473910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 17:58:12.138226   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:12.141100   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.141447   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.141473   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.141663   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:12.141847   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.142008   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.142138   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:12.142300   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:12.142474   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:12.142488   89506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 17:58:12.362220   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 17:58:12.362250   89506 main.go:141] libmachine: Checking connection to Docker...
	I1010 17:58:12.362259   89506 main.go:141] libmachine: (addons-473910) Calling .GetURL
	I1010 17:58:12.363730   89506 main.go:141] libmachine: (addons-473910) DBG | Using libvirt version 6000000
	I1010 17:58:12.366338   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.366716   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.366744   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.366912   89506 main.go:141] libmachine: Docker is up and running!
	I1010 17:58:12.366923   89506 main.go:141] libmachine: Reticulating splines...
	I1010 17:58:12.366931   89506 client.go:171] duration metric: took 24.20617252s to LocalClient.Create
	I1010 17:58:12.366956   89506 start.go:167] duration metric: took 24.206240514s to libmachine.API.Create "addons-473910"
	I1010 17:58:12.366978   89506 start.go:293] postStartSetup for "addons-473910" (driver="kvm2")
	I1010 17:58:12.366995   89506 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 17:58:12.367019   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:12.367297   89506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 17:58:12.367327   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:12.369615   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.369900   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.369927   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.370061   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:12.370274   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.370464   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:12.370623   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:12.455668   89506 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 17:58:12.460127   89506 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 17:58:12.460161   89506 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 17:58:12.460253   89506 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 17:58:12.460281   89506 start.go:296] duration metric: took 93.293264ms for postStartSetup
	I1010 17:58:12.460315   89506 main.go:141] libmachine: (addons-473910) Calling .GetConfigRaw
	I1010 17:58:12.460930   89506 main.go:141] libmachine: (addons-473910) Calling .GetIP
	I1010 17:58:12.463785   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.464123   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.464148   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.464388   89506 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/config.json ...
	I1010 17:58:12.464573   89506 start.go:128] duration metric: took 24.322393923s to createHost
	I1010 17:58:12.464598   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:12.466862   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.467243   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.467275   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.467434   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:12.467610   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.467763   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.467879   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:12.468038   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:12.468196   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:12.468205   89506 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 17:58:12.573941   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728583092.548842688
	
	I1010 17:58:12.573976   89506 fix.go:216] guest clock: 1728583092.548842688
	I1010 17:58:12.573985   89506 fix.go:229] Guest: 2024-10-10 17:58:12.548842688 +0000 UTC Remote: 2024-10-10 17:58:12.464587124 +0000 UTC m=+24.431579336 (delta=84.255564ms)
	I1010 17:58:12.574025   89506 fix.go:200] guest clock delta is within tolerance: 84.255564ms
	I1010 17:58:12.574031   89506 start.go:83] releasing machines lock for "addons-473910", held for 24.431922793s
	I1010 17:58:12.574058   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:12.574342   89506 main.go:141] libmachine: (addons-473910) Calling .GetIP
	I1010 17:58:12.577145   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.577517   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.577553   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.577707   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:12.578164   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:12.578331   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:12.578435   89506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 17:58:12.578502   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:12.578541   89506 ssh_runner.go:195] Run: cat /version.json
	I1010 17:58:12.578564   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:12.581152   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.581356   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.581529   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.581550   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.581673   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.581707   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.581709   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:12.581920   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.581924   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:12.582101   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:12.582151   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.582217   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:12.582281   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:12.582396   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:12.692202   89506 ssh_runner.go:195] Run: systemctl --version
	I1010 17:58:12.698535   89506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 17:58:12.867274   89506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 17:58:12.874143   89506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 17:58:12.874211   89506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 17:58:12.891055   89506 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 17:58:12.891082   89506 start.go:495] detecting cgroup driver to use...
	I1010 17:58:12.891149   89506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 17:58:12.907042   89506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 17:58:12.921019   89506 docker.go:217] disabling cri-docker service (if available) ...
	I1010 17:58:12.921088   89506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 17:58:12.935362   89506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 17:58:12.949688   89506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 17:58:13.064978   89506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 17:58:13.236648   89506 docker.go:233] disabling docker service ...
	I1010 17:58:13.236730   89506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 17:58:13.251723   89506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 17:58:13.266615   89506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 17:58:13.387840   89506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 17:58:13.526894   89506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 17:58:13.541511   89506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 17:58:13.561578   89506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 17:58:13.561647   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.573035   89506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 17:58:13.573124   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.584217   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.595490   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.606340   89506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 17:58:13.617278   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.628056   89506 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.646179   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
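	Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) can be sanity-checked on the guest; the expected values in the comments below are reconstructed from the commands logged above, not read from the actual file:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# roughly expected:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls list)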
	I1010 17:58:13.657029   89506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 17:58:13.666931   89506 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 17:58:13.666991   89506 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 17:58:13.680240   89506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
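	The sysctl failure above is expected before br_netfilter is loaded, which is why the code treats it as "might be okay"; the next two commands load the module and enable IPv4 forwarding. A condensed sketch of the same sequence, using only commands already shown in the log:
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables      # should now resolve instead of failing (typically 1)
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'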
	I1010 17:58:13.690852   89506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:58:13.812846   89506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 17:58:13.906985   89506 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 17:58:13.907092   89506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 17:58:13.912457   89506 start.go:563] Will wait 60s for crictl version
	I1010 17:58:13.912538   89506 ssh_runner.go:195] Run: which crictl
	I1010 17:58:13.916599   89506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 17:58:13.957494   89506 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 17:58:13.957577   89506 ssh_runner.go:195] Run: crio --version
	I1010 17:58:13.987726   89506 ssh_runner.go:195] Run: crio --version
	I1010 17:58:14.018586   89506 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 17:58:14.020146   89506 main.go:141] libmachine: (addons-473910) Calling .GetIP
	I1010 17:58:14.023405   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:14.023883   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:14.023911   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:14.024173   89506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 17:58:14.028415   89506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 17:58:14.041888   89506 kubeadm.go:883] updating cluster {Name:addons-473910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 17:58:14.042027   89506 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 17:58:14.042088   89506 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 17:58:14.076600   89506 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 17:58:14.076676   89506 ssh_runner.go:195] Run: which lz4
	I1010 17:58:14.080811   89506 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 17:58:14.085225   89506 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 17:58:14.085259   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 17:58:15.478999   89506 crio.go:462] duration metric: took 1.398201803s to copy over tarball
	I1010 17:58:15.479100   89506 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 17:58:17.682395   89506 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.203257299s)
	I1010 17:58:17.682439   89506 crio.go:469] duration metric: took 2.203401621s to extract the tarball
	I1010 17:58:17.682449   89506 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 17:58:17.719543   89506 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 17:58:17.764809   89506 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 17:58:17.764842   89506 cache_images.go:84] Images are preloaded, skipping loading
	I1010 17:58:17.764864   89506 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.31.1 crio true true} ...
	I1010 17:58:17.764995   89506 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-473910 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-473910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 17:58:17.765080   89506 ssh_runner.go:195] Run: crio config
	I1010 17:58:17.817292   89506 cni.go:84] Creating CNI manager for ""
	I1010 17:58:17.817318   89506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 17:58:17.817329   89506 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 17:58:17.817353   89506 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-473910 NodeName:addons-473910 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 17:58:17.817482   89506 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-473910"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
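	The kubeadm config above is later written to /var/tmp/minikube/kubeadm.yaml and fed to kubeadm init; as an illustrative sketch (not part of the logged run), the same file can be exercised without modifying the node via kubeadm's dry-run mode:
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run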
	
	I1010 17:58:17.817543   89506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 17:58:17.827789   89506 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 17:58:17.827853   89506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 17:58:17.838088   89506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1010 17:58:17.855875   89506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 17:58:17.873764   89506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1010 17:58:17.892082   89506 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I1010 17:58:17.896318   89506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 17:58:17.910294   89506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:58:18.023518   89506 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 17:58:18.041735   89506 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910 for IP: 192.168.39.238
	I1010 17:58:18.041789   89506 certs.go:194] generating shared ca certs ...
	I1010 17:58:18.041807   89506 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.041972   89506 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 17:58:18.136832   89506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt ...
	I1010 17:58:18.136880   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt: {Name:mk26dc60dafe21a2c355d9cd6a7d904857d94548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.137082   89506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key ...
	I1010 17:58:18.137094   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key: {Name:mk5e6f6eacdcc7a936a93570e1aec51b070fcd42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.137185   89506 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 17:58:18.208301   89506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt ...
	I1010 17:58:18.208338   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt: {Name:mkaa3880d369175d1a77a4ace6e6011fb87b8637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.208521   89506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key ...
	I1010 17:58:18.208534   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key: {Name:mk50f0454a0d3d1eb4b5e1ea0f31373d75aaaa8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.208628   89506 certs.go:256] generating profile certs ...
	I1010 17:58:18.208687   89506 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.key
	I1010 17:58:18.208699   89506 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt with IP's: []
	I1010 17:58:18.253772   89506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt ...
	I1010 17:58:18.253816   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: {Name:mkf5586b4869f587f7f271b82015415265fc91e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.253993   89506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.key ...
	I1010 17:58:18.254006   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.key: {Name:mkaef9e8d93ccb9cec6c394557644727c0cd33f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.254082   89506 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key.c19a97bb
	I1010 17:58:18.254103   89506 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt.c19a97bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238]
	I1010 17:58:18.526577   89506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt.c19a97bb ...
	I1010 17:58:18.526617   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt.c19a97bb: {Name:mk18dab76c910ff98e0593284f313379df8daf13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.526816   89506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key.c19a97bb ...
	I1010 17:58:18.526834   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key.c19a97bb: {Name:mkc7bbbfffd01b4407381c22059791d6cc3e5c24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.526935   89506 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt.c19a97bb -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt
	I1010 17:58:18.527028   89506 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key.c19a97bb -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key
	I1010 17:58:18.527097   89506 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.key
	I1010 17:58:18.527122   89506 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.crt with IP's: []
	I1010 17:58:18.614263   89506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.crt ...
	I1010 17:58:18.614298   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.crt: {Name:mk398d3b1ec4ae05deda29a9610094de716fff2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.614481   89506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.key ...
	I1010 17:58:18.614496   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.key: {Name:mkd238c47ccda9310b2990edc780446fe7ba11d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.614692   89506 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 17:58:18.614737   89506 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 17:58:18.614772   89506 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 17:58:18.614805   89506 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 17:58:18.615432   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 17:58:18.641430   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 17:58:18.665212   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 17:58:18.688766   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 17:58:18.715021   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1010 17:58:18.758339   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 17:58:18.784450   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 17:58:18.808045   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 17:58:18.831879   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 17:58:18.857009   89506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 17:58:18.878860   89506 ssh_runner.go:195] Run: openssl version
	I1010 17:58:18.885621   89506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 17:58:18.900666   89506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:58:18.905811   89506 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:58:18.905875   89506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:58:18.912419   89506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
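	The openssl/ln pair above installs the minikube CA under its OpenSSL subject-hash name (b5213941.0 for this CA); a minimal sketch of the same idea with the hash computed inline, using the cert path from the log:
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"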
	I1010 17:58:18.924812   89506 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 17:58:18.929887   89506 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 17:58:18.929947   89506 kubeadm.go:392] StartCluster: {Name:addons-473910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:58:18.930050   89506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:58:18.930139   89506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:58:18.970482   89506 cri.go:89] found id: ""
	I1010 17:58:18.970569   89506 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 17:58:18.981639   89506 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 17:58:18.992242   89506 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 17:58:19.003093   89506 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 17:58:19.003156   89506 kubeadm.go:157] found existing configuration files:
	
	I1010 17:58:19.003212   89506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 17:58:19.013394   89506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 17:58:19.013463   89506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 17:58:19.023807   89506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 17:58:19.033824   89506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 17:58:19.033885   89506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 17:58:19.044196   89506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 17:58:19.053925   89506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 17:58:19.054006   89506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 17:58:19.064001   89506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 17:58:19.073698   89506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 17:58:19.073779   89506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
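
The grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is searched for the expected control-plane endpoint, and any file that lacks it (or, as here, does not exist) is removed so kubeadm init can regenerate it. A minimal sketch of that pattern in Go, assuming plain local exec.Command in place of minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint is absent or the file is
		// missing; either way the stale file is dropped before kubeadm init.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("removing stale %s\n", conf)
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}
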
	I1010 17:58:19.083855   89506 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 17:58:19.138336   89506 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 17:58:19.138439   89506 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 17:58:19.245490   89506 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 17:58:19.245649   89506 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 17:58:19.245797   89506 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 17:58:19.254129   89506 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 17:58:19.287403   89506 out.go:235]   - Generating certificates and keys ...
	I1010 17:58:19.287569   89506 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 17:58:19.287678   89506 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 17:58:19.429909   89506 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 17:58:19.596602   89506 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1010 17:58:19.829898   89506 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1010 17:58:19.968113   89506 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1010 17:58:20.082793   89506 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1010 17:58:20.083061   89506 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-473910 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I1010 17:58:20.198718   89506 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1010 17:58:20.198889   89506 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-473910 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I1010 17:58:20.414158   89506 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 17:58:20.542564   89506 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 17:58:20.679787   89506 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1010 17:58:20.679965   89506 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 17:58:20.803829   89506 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 17:58:21.160425   89506 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 17:58:21.508888   89506 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 17:58:21.652583   89506 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 17:58:21.874428   89506 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 17:58:21.875064   89506 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 17:58:21.879825   89506 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 17:58:21.882416   89506 out.go:235]   - Booting up control plane ...
	I1010 17:58:21.882537   89506 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 17:58:21.882652   89506 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 17:58:21.882750   89506 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 17:58:21.897946   89506 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 17:58:21.906834   89506 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 17:58:21.906888   89506 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 17:58:22.033176   89506 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 17:58:22.033334   89506 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 17:58:23.048551   89506 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001627938s
	I1010 17:58:23.048766   89506 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 17:58:28.048016   89506 kubeadm.go:310] [api-check] The API server is healthy after 5.017057619s
	I1010 17:58:28.066325   89506 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 17:58:28.090379   89506 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 17:58:28.134513   89506 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 17:58:28.134727   89506 kubeadm.go:310] [mark-control-plane] Marking the node addons-473910 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 17:58:28.147951   89506 kubeadm.go:310] [bootstrap-token] Using token: urf6qy.dqcbghijitdjybk3
	I1010 17:58:28.149572   89506 out.go:235]   - Configuring RBAC rules ...
	I1010 17:58:28.149689   89506 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 17:58:28.159529   89506 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 17:58:28.170709   89506 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 17:58:28.174299   89506 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 17:58:28.177513   89506 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 17:58:28.180657   89506 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 17:58:28.456897   89506 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 17:58:28.881476   89506 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 17:58:29.458261   89506 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 17:58:29.459365   89506 kubeadm.go:310] 
	I1010 17:58:29.459470   89506 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 17:58:29.459496   89506 kubeadm.go:310] 
	I1010 17:58:29.459665   89506 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 17:58:29.459678   89506 kubeadm.go:310] 
	I1010 17:58:29.459731   89506 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 17:58:29.459789   89506 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 17:58:29.459836   89506 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 17:58:29.459842   89506 kubeadm.go:310] 
	I1010 17:58:29.459884   89506 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 17:58:29.459900   89506 kubeadm.go:310] 
	I1010 17:58:29.459984   89506 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 17:58:29.459993   89506 kubeadm.go:310] 
	I1010 17:58:29.460064   89506 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 17:58:29.460177   89506 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 17:58:29.460263   89506 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 17:58:29.460277   89506 kubeadm.go:310] 
	I1010 17:58:29.460371   89506 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 17:58:29.460475   89506 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 17:58:29.460487   89506 kubeadm.go:310] 
	I1010 17:58:29.460617   89506 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token urf6qy.dqcbghijitdjybk3 \
	I1010 17:58:29.460753   89506 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 17:58:29.460789   89506 kubeadm.go:310] 	--control-plane 
	I1010 17:58:29.460796   89506 kubeadm.go:310] 
	I1010 17:58:29.460924   89506 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 17:58:29.460939   89506 kubeadm.go:310] 
	I1010 17:58:29.461054   89506 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token urf6qy.dqcbghijitdjybk3 \
	I1010 17:58:29.461205   89506 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 17:58:29.462071   89506 kubeadm.go:310] W1010 17:58:19.118276     818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 17:58:29.462451   89506 kubeadm.go:310] W1010 17:58:19.119144     818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 17:58:29.462592   89506 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 17:58:29.462609   89506 cni.go:84] Creating CNI manager for ""
	I1010 17:58:29.462619   89506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 17:58:29.464485   89506 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 17:58:29.465978   89506 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 17:58:29.478126   89506 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 17:58:29.502974   89506 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 17:58:29.503047   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:29.503114   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-473910 minikube.k8s.io/updated_at=2024_10_10T17_58_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=addons-473910 minikube.k8s.io/primary=true
	I1010 17:58:29.628639   89506 ops.go:34] apiserver oom_adj: -16
	I1010 17:58:29.628702   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:30.129418   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:30.629386   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:31.129511   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:31.628971   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:32.129173   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:32.629131   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:33.129178   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:33.212926   89506 kubeadm.go:1113] duration metric: took 3.709946724s to wait for elevateKubeSystemPrivileges
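
The half-second cadence of the repeated "kubectl get sa default" runs above is minikube polling until the default ServiceAccount exists; elevateKubeSystemPrivileges returns as soon as it does, hence the ~3.7s duration metric. A minimal sketch of that wait loop, with the 500ms interval inferred from the timestamps and a hard deadline added for illustration (the real helper in minikube is not reproduced here):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls kubectl until kube-controller-manager
// has created the "default" ServiceAccount, i.e. the control plane is usable.
func waitForDefaultServiceAccount(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount(time.Minute); err != nil {
		fmt.Println(err)
	}
}
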
	I1010 17:58:33.212967   89506 kubeadm.go:394] duration metric: took 14.283023824s to StartCluster
	I1010 17:58:33.212993   89506 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:33.213159   89506 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 17:58:33.213639   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:33.213860   89506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 17:58:33.213886   89506 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 17:58:33.213954   89506 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1010 17:58:33.214101   89506 addons.go:69] Setting yakd=true in profile "addons-473910"
	I1010 17:58:33.214119   89506 addons.go:69] Setting inspektor-gadget=true in profile "addons-473910"
	I1010 17:58:33.214137   89506 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-473910"
	I1010 17:58:33.214145   89506 addons.go:234] Setting addon inspektor-gadget=true in "addons-473910"
	I1010 17:58:33.214143   89506 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-473910"
	I1010 17:58:33.214156   89506 addons.go:69] Setting cloud-spanner=true in profile "addons-473910"
	I1010 17:58:33.214169   89506 addons.go:69] Setting volumesnapshots=true in profile "addons-473910"
	I1010 17:58:33.214161   89506 addons.go:69] Setting metrics-server=true in profile "addons-473910"
	I1010 17:58:33.214170   89506 addons.go:69] Setting registry=true in profile "addons-473910"
	I1010 17:58:33.214218   89506 addons.go:234] Setting addon registry=true in "addons-473910"
	I1010 17:58:33.214262   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214126   89506 addons.go:69] Setting default-storageclass=true in profile "addons-473910"
	I1010 17:58:33.214299   89506 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-473910"
	I1010 17:58:33.214160   89506 addons.go:69] Setting volcano=true in profile "addons-473910"
	I1010 17:58:33.214329   89506 addons.go:234] Setting addon volcano=true in "addons-473910"
	I1010 17:58:33.214363   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214125   89506 config.go:182] Loaded profile config "addons-473910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 17:58:33.214152   89506 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-473910"
	I1010 17:58:33.214164   89506 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-473910"
	I1010 17:58:33.214585   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214152   89506 addons.go:69] Setting storage-provisioner=true in profile "addons-473910"
	I1010 17:58:33.214657   89506 addons.go:234] Setting addon storage-provisioner=true in "addons-473910"
	I1010 17:58:33.214171   89506 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-473910"
	I1010 17:58:33.214684   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214711   89506 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-473910"
	I1010 17:58:33.214743   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214755   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214770   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214780   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214793   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214832   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214855   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214873   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214882   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214967   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.215003   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.215050   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214174   89506 addons.go:69] Setting ingress=true in profile "addons-473910"
	I1010 17:58:33.215096   89506 addons.go:234] Setting addon ingress=true in "addons-473910"
	I1010 17:58:33.215097   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.215139   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.215187   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.215215   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214179   89506 addons.go:234] Setting addon cloud-spanner=true in "addons-473910"
	I1010 17:58:33.215408   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214180   89506 addons.go:69] Setting gcp-auth=true in profile "addons-473910"
	I1010 17:58:33.215506   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214180   89506 addons.go:234] Setting addon volumesnapshots=true in "addons-473910"
	I1010 17:58:33.215539   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.215545   89506 mustload.go:65] Loading cluster: addons-473910
	I1010 17:58:33.215556   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.215725   89506 config.go:182] Loaded profile config "addons-473910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 17:58:33.215781   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.215813   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214183   89506 addons.go:69] Setting ingress-dns=true in profile "addons-473910"
	I1010 17:58:33.215896   89506 addons.go:234] Setting addon ingress-dns=true in "addons-473910"
	I1010 17:58:33.215933   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.216007   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.216059   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214188   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214128   89506 addons.go:234] Setting addon yakd=true in "addons-473910"
	I1010 17:58:33.216304   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214184   89506 addons.go:234] Setting addon metrics-server=true in "addons-473910"
	I1010 17:58:33.216425   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.216714   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.216735   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.216714   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.216803   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.216821   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.216835   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.217963   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.218046   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.220938   89506 out.go:177] * Verifying Kubernetes components...
	I1010 17:58:33.233526   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.233582   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.233747   89506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:58:33.239210   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I1010 17:58:33.241357   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.244055   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I1010 17:58:33.244203   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34357
	I1010 17:58:33.244235   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.244252   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.244686   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.244780   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.245200   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.245220   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.245365   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.245376   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.245654   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.245707   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.246238   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.246291   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.246497   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I1010 17:58:33.246524   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.246855   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.247618   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.247635   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.247664   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.248064   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.248517   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.248547   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.250849   89506 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-473910"
	I1010 17:58:33.250903   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.251282   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.251303   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.252123   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.252167   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.258889   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I1010 17:58:33.259751   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.260609   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.260637   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.261204   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.261548   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.264965   89506 addons.go:234] Setting addon default-storageclass=true in "addons-473910"
	I1010 17:58:33.265022   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.265431   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.265481   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.271189   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I1010 17:58:33.271244   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I1010 17:58:33.271672   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.272175   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.272199   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.272611   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.273233   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.273275   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.274110   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I1010 17:58:33.274375   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.274933   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.274957   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.275030   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.275302   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.275828   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.275857   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.276168   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.276194   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.277874   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42571
	I1010 17:58:33.278475   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.279099   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.279118   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.279542   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.280121   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.280174   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.280378   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41915
	I1010 17:58:33.280816   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.281350   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.281369   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.281497   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.281874   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.281927   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.282518   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.282559   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.283865   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.284253   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.284276   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.285057   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I1010 17:58:33.285453   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I1010 17:58:33.285481   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.285859   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.286041   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.286057   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.286379   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.286520   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.286539   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.287109   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.287153   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.287831   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.288696   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I1010 17:58:33.289172   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.289740   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.289757   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.290134   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.290656   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.290697   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.293405   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I1010 17:58:33.293468   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I1010 17:58:33.294917   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1010 17:58:33.299518   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45167
	I1010 17:58:33.300067   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.300689   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.300717   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.301003   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I1010 17:58:33.303337   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.303411   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34631
	I1010 17:58:33.303519   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I1010 17:58:33.304019   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.304039   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.304190   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.304501   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.304674   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.304686   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.304964   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.304983   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.305051   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.305453   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.307407   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33519
	I1010 17:58:33.309219   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I1010 17:58:33.309307   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I1010 17:58:33.309365   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.309364   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.309424   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.309452   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.309416   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.309666   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.309770   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.309850   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.309871   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.310279   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.310296   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.310384   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.310393   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.310431   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.310482   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.310512   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.310820   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.310878   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.311008   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.311011   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.311020   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.311029   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.311165   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.311180   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.311391   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.311451   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.311595   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.311619   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.312146   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.312149   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.312177   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.312222   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.312665   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.312690   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.312665   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.312729   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.312744   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.313064   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.313112   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.313570   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.313738   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.314917   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.315669   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.316204   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.317082   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.317297   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:33.317314   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:33.317343   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.318483   89506 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1010 17:58:33.318501   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1010 17:58:33.318888   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.318945   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.318961   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:33.318976   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:33.319315   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:33.319324   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:33.319329   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:33.319613   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:33.319630   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	W1010 17:58:33.319713   89506 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
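
Each addon goroutine above runs the same handshake against the kvm2 machine plugin before it can act on "addons-473910": reach the plugin server, negotiate the API version, push the raw machine config, query the machine name and state, and close the driver connection when done. A sketch of that call sequence; DriverClient and stubDriver are illustrative stand-ins, not libmachine's real RPC types:

package main

import "fmt"

// DriverClient lists only the calls visible in the log above.
type DriverClient interface {
	GetVersion() int
	SetConfigRaw(raw []byte) error
	GetMachineName() string
	GetState() (string, error)
	DriverName() string
	Close() error
}

// handshake mirrors the per-addon sequence: version negotiation, config push,
// then name/state queries before the connection is closed.
func handshake(c DriverClient, rawConfig []byte) error {
	fmt.Println("plugin API version:", c.GetVersion()) // "Using API Version  1"
	if err := c.SetConfigRaw(rawConfig); err != nil {
		return err
	}
	fmt.Println("machine:", c.GetMachineName())
	state, err := c.GetState()
	if err != nil {
		return err
	}
	fmt.Println("driver:", c.DriverName(), "state:", state)
	return c.Close() // "Making call to close driver server"
}

// stubDriver returns canned values so the sketch runs without libvirt.
type stubDriver struct{}

func (stubDriver) GetVersion() int               { return 1 }
func (stubDriver) SetConfigRaw(raw []byte) error { return nil }
func (stubDriver) GetMachineName() string        { return "addons-473910" }
func (stubDriver) GetState() (string, error)     { return "Running", nil }
func (stubDriver) DriverName() string            { return "kvm2" }
func (stubDriver) Close() error                  { return nil }

func main() {
	if err := handshake(stubDriver{}, nil); err != nil {
		fmt.Println("handshake failed:", err)
	}
}
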
	I1010 17:58:33.319815   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1010 17:58:33.319840   89506 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1010 17:58:33.319863   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.319925   89506 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1010 17:58:33.319936   89506 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1010 17:58:33.319950   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.320612   89506 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1010 17:58:33.320623   89506 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 17:58:33.320671   89506 out.go:177]   - Using image docker.io/registry:2.8.3
	I1010 17:58:33.322089   89506 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1010 17:58:33.322109   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1010 17:58:33.322130   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.322206   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38351
	I1010 17:58:33.322786   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.323172   89506 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 17:58:33.323196   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 17:58:33.323215   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.323568   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.323586   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.324094   89506 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1010 17:58:33.324454   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.324954   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.325101   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.325335   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.325564   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.325622   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.325641   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.325908   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.325913   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.325932   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.326230   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.326230   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.326324   89506 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1010 17:58:33.326342   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1010 17:58:33.326363   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.326489   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.326800   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.327086   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.327305   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.327338   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.327497   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.328164   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.328188   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.328666   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.328973   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.329132   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.329265   89506 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1010 17:58:33.329383   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.329637   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.330162   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.330184   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.330463   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.330630   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.330694   89506 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 17:58:33.330713   89506 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 17:58:33.330731   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.330752   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.331353   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.332069   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.332540   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.332571   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.332934   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.333087   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.333212   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.333331   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.334850   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.335391   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.335417   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.335590   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.335659   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42281
	I1010 17:58:33.335917   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.336052   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.336114   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.336266   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.336736   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.336761   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.337134   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.337334   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.339025   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.341041   89506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1010 17:58:33.342395   89506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1010 17:58:33.344695   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I1010 17:58:33.345160   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.345634   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.345649   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.346154   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.346459   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.347020   89506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1010 17:58:33.348149   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.348695   89506 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1010 17:58:33.348715   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1010 17:58:33.348740   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.349523   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I1010 17:58:33.349930   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.350517   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.350534   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.351293   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.351377   89506 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1010 17:58:33.351658   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.352813   89506 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1010 17:58:33.352832   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1010 17:58:33.352869   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.353473   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.353709   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40039
	I1010 17:58:33.354211   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.354391   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.354748   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.354769   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.355061   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.355188   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.355204   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.355359   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.355423   89506 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1010 17:58:33.355576   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.355787   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.356124   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.356325   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I1010 17:58:33.356613   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.356736   89506 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1010 17:58:33.356751   89506 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1010 17:58:33.356770   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.357675   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.358130   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.358422   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.358442   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.358691   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.358793   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.358891   89506 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 17:58:33.358932   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.358961   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.358974   89506 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 17:58:33.358995   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.359006   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.359184   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.359417   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.359547   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.359818   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.360250   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.360749   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.360768   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.361013   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.361138   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.361243   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.361356   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.361858   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.362274   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.362293   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.362439   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.362610   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.362737   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.362836   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.364890   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I1010 17:58:33.365302   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.365756   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.365774   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.365902   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I1010 17:58:33.366081   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.366229   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.366593   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.367166   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.367189   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.367516   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.367690   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.367705   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.369251   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.369739   89506 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1010 17:58:33.371182   89506 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1010 17:58:33.372772   89506 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1010 17:58:33.372793   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1010 17:58:33.372816   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.372901   89506 out.go:177]   - Using image docker.io/busybox:stable
	I1010 17:58:33.374654   89506 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1010 17:58:33.374674   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1010 17:58:33.374697   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.376408   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38875
	I1010 17:58:33.376890   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.376915   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.376934   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.377121   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.377126   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.377343   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.377537   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.377704   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.377828   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.377846   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.378213   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.378315   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.378480   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.378727   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.378749   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	W1010 17:58:33.378850   89506 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1010 17:58:33.378878   89506 retry.go:31] will retry after 324.019951ms: ssh: handshake failed: EOF
	I1010 17:58:33.378932   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.379149   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.379319   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.379486   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.380027   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.381888   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W1010 17:58:33.383311   89506 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43324->192.168.39.238:22: read: connection reset by peer
	I1010 17:58:33.383343   89506 retry.go:31] will retry after 193.322587ms: ssh: handshake failed: read tcp 192.168.39.1:43324->192.168.39.238:22: read: connection reset by peer
	I1010 17:58:33.385241   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1010 17:58:33.386766   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1010 17:58:33.388117   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1010 17:58:33.389307   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1010 17:58:33.390822   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1010 17:58:33.392330   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1010 17:58:33.393720   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1010 17:58:33.395092   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1010 17:58:33.395138   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1010 17:58:33.395176   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.398560   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.399013   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.399041   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.399273   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.399507   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.399654   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.399810   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.763625   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 17:58:33.764231   89506 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 17:58:33.764262   89506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
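Note on the pipeline above: it fetches the coredns ConfigMap, uses sed to splice a hosts block (resolving host.minikube.internal to the host-side address 192.168.39.1) ahead of the "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", then feeds the result back through kubectl replace -f -. Reconstructed from those sed expressions, the relevant fragment of the edited Corefile looks roughly like this (the surrounding plugins are the stock CoreDNS config and are abridged here):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

The hosts plugin answers queries for host.minikube.internal locally and falls through to the rest of the plugin chain for everything else.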
	I1010 17:58:33.770487   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1010 17:58:33.881674   89506 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 17:58:33.881700   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1010 17:58:33.896480   89506 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1010 17:58:33.896513   89506 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1010 17:58:33.921560   89506 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:58:33.921596   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1010 17:58:33.945283   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1010 17:58:33.945316   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1010 17:58:33.946380   89506 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1010 17:58:33.946410   89506 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1010 17:58:33.976668   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1010 17:58:33.990537   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1010 17:58:34.009872   89506 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1010 17:58:34.009903   89506 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1010 17:58:34.055701   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 17:58:34.081759   89506 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 17:58:34.081789   89506 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 17:58:34.082201   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1010 17:58:34.132576   89506 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1010 17:58:34.132610   89506 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1010 17:58:34.190285   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1010 17:58:34.190324   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1010 17:58:34.234662   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1010 17:58:34.251140   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:58:34.252868   89506 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1010 17:58:34.252887   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1010 17:58:34.253278   89506 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1010 17:58:34.253296   89506 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1010 17:58:34.290479   89506 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 17:58:34.290505   89506 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 17:58:34.324813   89506 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1010 17:58:34.324859   89506 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1010 17:58:34.380170   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1010 17:58:34.426944   89506 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1010 17:58:34.426972   89506 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1010 17:58:34.453372   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1010 17:58:34.453400   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1010 17:58:34.522711   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 17:58:34.590593   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1010 17:58:34.590630   89506 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1010 17:58:34.609365   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1010 17:58:34.609395   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1010 17:58:34.639642   89506 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1010 17:58:34.639667   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1010 17:58:34.842298   89506 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1010 17:58:34.842324   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1010 17:58:34.873055   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1010 17:58:34.873083   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1010 17:58:34.894285   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1010 17:58:35.116629   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1010 17:58:35.225561   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1010 17:58:35.225587   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1010 17:58:35.603920   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1010 17:58:35.603947   89506 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1010 17:58:36.046596   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1010 17:58:36.046626   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1010 17:58:36.518357   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1010 17:58:36.518385   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1010 17:58:36.862694   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1010 17:58:36.862723   89506 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1010 17:58:37.179728   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1010 17:58:38.211699   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.448035829s)
	I1010 17:58:38.211748   89506 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.447453296s)
	I1010 17:58:38.211768   89506 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1010 17:58:38.211800   89506 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.447538086s)
	I1010 17:58:38.211771   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:38.211873   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:38.211899   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.441386033s)
	I1010 17:58:38.211971   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:38.211989   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:38.212309   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:38.212328   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:38.212338   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:38.212346   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:38.212476   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:38.212536   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:38.212559   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:38.212577   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:38.212466   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:38.213136   89506 node_ready.go:35] waiting up to 6m0s for node "addons-473910" to be "Ready" ...
	I1010 17:58:38.213332   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:38.213344   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:38.213370   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:38.213383   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:38.213399   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:38.213424   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:38.241250   89506 node_ready.go:49] node "addons-473910" has status "Ready":"True"
	I1010 17:58:38.241286   89506 node_ready.go:38] duration metric: took 28.127142ms for node "addons-473910" to be "Ready" ...
	I1010 17:58:38.241298   89506 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 17:58:38.293987   89506 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-b5dd8" in "kube-system" namespace to be "Ready" ...
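The readiness gate here polls the API server for each system-critical pod in turn; from the command line the same check for the CoreDNS pods would look roughly like `kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s` (kubeconfig path taken from the apply commands above; the other component labels listed in the log are checked the same way).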
	I1010 17:58:38.731251   89506 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-473910" context rescaled to 1 replicas
	I1010 17:58:39.808135   89506 pod_ready.go:93] pod "coredns-7c65d6cfc9-b5dd8" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:39.808162   89506 pod_ready.go:82] duration metric: took 1.514148752s for pod "coredns-7c65d6cfc9-b5dd8" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:39.808174   89506 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:40.369825   89506 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1010 17:58:40.369893   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:40.373954   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:40.374559   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:40.374594   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:40.374843   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:40.375220   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:40.375433   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:40.375614   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:40.769765   89506 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1010 17:58:40.920415   89506 addons.go:234] Setting addon gcp-auth=true in "addons-473910"
	I1010 17:58:40.920470   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:40.920820   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:40.920888   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:40.936420   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I1010 17:58:40.936970   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:40.937487   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:40.937508   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:40.937865   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:40.938485   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:40.938534   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:40.953991   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I1010 17:58:40.954516   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:40.955042   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:40.955068   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:40.955412   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:40.955685   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:40.957168   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:40.957514   89506 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1010 17:58:40.957549   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:40.960919   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:40.961395   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:40.961428   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:40.961555   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:40.961813   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:40.962007   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:40.962184   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:41.851281   89506 pod_ready.go:103] pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:42.201577   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.224859812s)
	I1010 17:58:42.201642   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.211064893s)
	I1010 17:58:42.201658   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.201675   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.201688   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.201706   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.201715   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.145982818s)
	I1010 17:58:42.201753   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.201769   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.201827   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.119597488s)
	I1010 17:58:42.201858   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.201868   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.201912   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.967207128s)
	I1010 17:58:42.201942   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.201952   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.202079   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.950910413s)
	I1010 17:58:42.202097   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.202105   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.202182   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.821985372s)
	I1010 17:58:42.202200   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.202209   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.202295   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.679542364s)
	I1010 17:58:42.202325   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.202337   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.202434   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.308113924s)
	I1010 17:58:42.202455   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.202464   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.202613   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.085942304s)
	W1010 17:58:42.202649   89506 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1010 17:58:42.202687   89506 retry.go:31] will retry after 371.604527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
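The failure above is an ordering race rather than a broken manifest: the batch submits a VolumeSnapshotClass custom resource together with the CRDs that define it, and the API server has not finished registering the new snapshot.storage.k8s.io/v1 kinds when the custom resource arrives, hence "ensure CRDs are installed first". The retry a few hundred milliseconds later (re-issued below with --force) succeeds once the CRDs are established. A minimal sketch of that retry pattern, shelling out to kubectl from Go; the paths, attempt count, and backoff are illustrative, not minikube's actual values:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply` so that custom resources submitted in
    // the same batch as their CRDs can be created once the CRDs are established.
    func applyWithRetry(kubeconfig string, manifests []string, attempts int) error {
        args := []string{"--kubeconfig", kubeconfig, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
            time.Sleep(time.Duration(i+1) * 500 * time.Millisecond) // simple linear backoff
        }
        return lastErr
    }

    func main() {
        // Hypothetical invocation mirroring the snapshot-class apply above.
        if err := applyWithRetry("/var/lib/minikube/kubeconfig",
            []string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}, 3); err != nil {
            fmt.Println(err)
        }
    }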
	I1010 17:58:42.205079   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205081   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205091   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205103   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205113   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205120   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205232   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205244   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205253   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205259   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205262   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205277   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205286   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205300   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205310   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205317   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205334   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205339   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205351   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205358   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205360   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205366   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205368   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205375   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205381   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205417   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205425   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205434   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205434   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205441   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205447   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205459   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205465   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205472   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205478   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205526   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205546   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205552   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205559   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205565   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205739   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205749   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205757   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205300   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205763   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.210057   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210068   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210081   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210130   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210167   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210189   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210199   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210213   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210224   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210226   89506 addons.go:475] Verifying addon metrics-server=true in "addons-473910"
	I1010 17:58:42.210243   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210272   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210275   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210281   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210283   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210289   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210289   89506 addons.go:475] Verifying addon ingress=true in "addons-473910"
	I1010 17:58:42.210295   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210326   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210335   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210341   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210354   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210364   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210365   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210498   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210372   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210565   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210578   89506 addons.go:475] Verifying addon registry=true in "addons-473910"
	I1010 17:58:42.212263   89506 out.go:177] * Verifying registry addon...
	I1010 17:58:42.212273   89506 out.go:177] * Verifying ingress addon...
	I1010 17:58:42.212277   89506 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-473910 service yakd-dashboard -n yakd-dashboard
	
	I1010 17:58:42.214541   89506 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1010 17:58:42.214715   89506 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1010 17:58:42.238213   89506 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1010 17:58:42.238247   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:42.238284   89506 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1010 17:58:42.238303   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:42.260493   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.260528   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.260844   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.260878   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	W1010 17:58:42.261035   89506 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
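The storageclass warning above is a routine optimistic-concurrency conflict: the addon tried to mark the local-path StorageClass as non-default, but the object was modified by another writer between its read and its write, so the API server rejected the stale update. The usual remedy is to re-read the object and reapply the change on conflict; a minimal client-go sketch of that pattern (kubeconfig path and annotation value are assumptions for illustration, not what minikube itself runs):

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // RetryOnConflict re-reads the StorageClass and reapplies the change
        // whenever the update fails with a resourceVersion conflict.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            log.Fatal(err)
        }
    }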
	I1010 17:58:42.284140   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.284173   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.284466   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.284490   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.284506   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.574843   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1010 17:58:42.728905   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:42.729511   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:43.145904   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.966113437s)
	I1010 17:58:43.145962   89506 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.188418776s)
	I1010 17:58:43.145985   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:43.146016   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:43.146313   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:43.146333   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:43.146342   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:43.146342   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:43.146349   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:43.146568   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:43.146581   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:43.146593   89506 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-473910"
	I1010 17:58:43.148070   89506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1010 17:58:43.148079   89506 out.go:177] * Verifying csi-hostpath-driver addon...
	I1010 17:58:43.149882   89506 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1010 17:58:43.150503   89506 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1010 17:58:43.151715   89506 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1010 17:58:43.151740   89506 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1010 17:58:43.166627   89506 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1010 17:58:43.166654   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:43.223467   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:43.223491   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:43.336864   89506 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1010 17:58:43.336891   89506 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1010 17:58:43.400142   89506 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1010 17:58:43.400170   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1010 17:58:43.497283   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1010 17:58:43.655571   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:43.718488   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:43.719035   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:44.019146   89506 pod_ready.go:103] pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:44.158722   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:44.222414   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:44.222456   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:44.669807   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:44.764206   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:44.764244   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:45.157044   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:45.223326   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:45.223984   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:45.263172   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.68826102s)
	I1010 17:58:45.263255   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:45.263273   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.765946133s)
	I1010 17:58:45.263324   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:45.263286   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:45.263357   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:45.263749   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:45.263765   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:45.263786   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:45.263790   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:45.263796   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:45.263800   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:45.263806   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:45.263808   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:45.263833   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:45.263761   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:45.264006   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:45.264022   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:45.264133   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:45.264147   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:45.266253   89506 addons.go:475] Verifying addon gcp-auth=true in "addons-473910"
	I1010 17:58:45.268487   89506 out.go:177] * Verifying gcp-auth addon...
	I1010 17:58:45.270870   89506 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1010 17:58:45.274245   89506 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1010 17:58:45.274273   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:45.656157   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:45.756106   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:45.756813   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:45.774841   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:46.156328   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:46.223449   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:46.225192   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:46.274342   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:46.324651   89506 pod_ready.go:98] pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.238 HostIPs:[{IP:192.168.39.238}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-10-10 17:58:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-10 17:58:38 +0000 UTC,FinishedAt:2024-10-10 17:58:44 +0000 UTC,ContainerID:cri-o://f1cd54ec71477ff4945cddb307bce9f5664f3364b54c2c3dfab6a38ba7f8f896,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f1cd54ec71477ff4945cddb307bce9f5664f3364b54c2c3dfab6a38ba7f8f896 Started:0xc0025ebda0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0025d9610} {Name:kube-api-access-gvn8m MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0025d9620}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1010 17:58:46.324686   89506 pod_ready.go:82] duration metric: took 6.516505756s for pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace to be "Ready" ...
	E1010 17:58:46.324700   89506 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.238 HostIPs:[{IP:192.168.39.238}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-10-10 17:58:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-10 17:58:38 +0000 UTC,FinishedAt:2024-10-10 17:58:44 +0000 UTC,ContainerID:cri-o://f1cd54ec71477ff4945cddb307bce9f5664f3364b54c2c3dfab6a38ba7f8f896,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f1cd54ec71477ff4945cddb307bce9f5664f3364b54c2c3dfab6a38ba7f8f896 Started:0xc0025ebda0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0025d9610} {Name:kube-api-access-gvn8m MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0025d9620}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1010 17:58:46.324711   89506 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.329352   89506 pod_ready.go:93] pod "etcd-addons-473910" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:46.329375   89506 pod_ready.go:82] duration metric: took 4.656281ms for pod "etcd-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.329386   89506 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.336117   89506 pod_ready.go:93] pod "kube-apiserver-addons-473910" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:46.336138   89506 pod_ready.go:82] duration metric: took 6.746483ms for pod "kube-apiserver-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.336148   89506 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.341140   89506 pod_ready.go:93] pod "kube-controller-manager-addons-473910" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:46.341160   89506 pod_ready.go:82] duration metric: took 5.005743ms for pod "kube-controller-manager-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.341171   89506 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qx6m4" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.347142   89506 pod_ready.go:93] pod "kube-proxy-qx6m4" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:46.347164   89506 pod_ready.go:82] duration metric: took 5.987241ms for pod "kube-proxy-qx6m4" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.347175   89506 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.656395   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:46.712574   89506 pod_ready.go:93] pod "kube-scheduler-addons-473910" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:46.712600   89506 pod_ready.go:82] duration metric: took 365.416615ms for pod "kube-scheduler-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.712618   89506 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.756408   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:46.757208   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:46.774438   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:47.155677   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:47.222557   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:47.222961   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:47.275095   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:47.655534   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:47.720858   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:47.721226   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:47.775381   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:48.155166   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:48.218982   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:48.219548   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:48.275282   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:48.656028   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:48.722051   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:48.722291   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:48.722695   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:49.042552   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:49.156488   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:49.220399   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:49.220692   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:49.274299   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:49.655944   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:49.721802   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:49.723015   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:49.775797   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:50.155947   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:50.219210   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:50.219840   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:50.275476   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:50.656112   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:50.719599   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:50.719930   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:50.774375   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:51.157705   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:51.219816   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:51.221032   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:51.221450   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:51.275911   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:51.655286   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:51.721918   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:51.723043   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:51.775305   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:52.155792   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:52.219737   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:52.219886   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:52.275455   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:52.655755   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:52.719149   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:52.720636   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:52.775482   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:53.155338   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:53.220963   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:53.221934   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:53.222946   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:53.274415   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:53.655730   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:53.720497   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:53.721260   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:53.774781   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:54.155248   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:54.219233   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:54.219574   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:54.275402   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:54.656141   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:54.719828   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:54.720254   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:54.774869   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:55.238174   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:55.238309   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:55.238329   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:55.239482   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:55.274657   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:55.654798   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:55.721031   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:55.721695   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:55.774769   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:56.156015   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:56.220928   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:56.221181   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:56.275608   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:56.655407   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:56.718994   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:56.719576   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:56.775062   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:57.206282   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:57.218877   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:57.219060   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:57.488636   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:57.655929   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:57.719210   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:57.719457   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:57.720305   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:57.775238   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:58.155584   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:58.220085   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:58.220650   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:58.274869   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:58.654750   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:58.720368   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:58.720400   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:58.774451   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:59.156225   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:59.219041   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:59.219365   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:59.274914   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:59.655258   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:59.720115   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:59.720772   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:59.720817   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:59.774908   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:00.157458   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:00.219695   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:00.220106   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:00.220292   89506 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"True"
	I1010 17:59:00.220309   89506 pod_ready.go:82] duration metric: took 13.507684315s for pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace to be "Ready" ...
	I1010 17:59:00.220327   89506 pod_ready.go:39] duration metric: took 21.97901843s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 17:59:00.220364   89506 api_server.go:52] waiting for apiserver process to appear ...
	I1010 17:59:00.220433   89506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 17:59:00.238835   89506 api_server.go:72] duration metric: took 27.024908677s to wait for apiserver process to appear ...
	I1010 17:59:00.238865   89506 api_server.go:88] waiting for apiserver healthz status ...
	I1010 17:59:00.238889   89506 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I1010 17:59:00.245346   89506 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I1010 17:59:00.247043   89506 api_server.go:141] control plane version: v1.31.1
	I1010 17:59:00.247074   89506 api_server.go:131] duration metric: took 8.202236ms to wait for apiserver health ...
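(Note: the healthz probe logged above is essentially an HTTPS GET against the apiserver's /healthz endpoint. A minimal sketch follows; the timeout and the InsecureSkipVerify transport are illustrative assumptions, not what the harness actually configures.)

package example

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz hits https://<addr>/healthz and reports the status code and body.
func probeHealthz(addr string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("apiserver not healthy: %d", resp.StatusCode)
	}
	return nil
}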
	I1010 17:59:00.247083   89506 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 17:59:00.257576   89506 system_pods.go:59] 17 kube-system pods found
	I1010 17:59:00.257612   89506 system_pods.go:61] "coredns-7c65d6cfc9-b5dd8" [fc517273-4630-428f-99ab-0965a9e1b483] Running
	I1010 17:59:00.257621   89506 system_pods.go:61] "csi-hostpath-attacher-0" [5b8784e3-5271-4a1e-a3fd-aa6f61bef065] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:59:00.257630   89506 system_pods.go:61] "csi-hostpath-resizer-0" [66c8883a-7176-4819-a2d9-e88a9f7e9311] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:59:00.257638   89506 system_pods.go:61] "csi-hostpathplugin-fmhgf" [b9750fdd-c60e-4cdb-ac1e-6d7ac5ec9aab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1010 17:59:00.257647   89506 system_pods.go:61] "etcd-addons-473910" [796b099e-eee7-4be3-9845-d78a9d74cbd6] Running
	I1010 17:59:00.257652   89506 system_pods.go:61] "kube-apiserver-addons-473910" [cd91ec20-324a-4b99-bd28-7d32f89d1e56] Running
	I1010 17:59:00.257656   89506 system_pods.go:61] "kube-controller-manager-addons-473910" [615b8b3b-c358-4f08-b0e5-63448f99a101] Running
	I1010 17:59:00.257661   89506 system_pods.go:61] "kube-ingress-dns-minikube" [292a5f4d-bcd5-4dd5-8530-4228a6d71ff5] Running
	I1010 17:59:00.257666   89506 system_pods.go:61] "kube-proxy-qx6m4" [5a52a8d5-4cda-449b-b74f-cbc835d4dc37] Running
	I1010 17:59:00.257669   89506 system_pods.go:61] "kube-scheduler-addons-473910" [a2234379-3bab-4bb8-be1e-da56ef4f0f89] Running
	I1010 17:59:00.257675   89506 system_pods.go:61] "metrics-server-84c5f94fbc-sr88b" [562db437-e740-4818-a2fd-dec917bd22cf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:59:00.257682   89506 system_pods.go:61] "nvidia-device-plugin-daemonset-6cgkn" [a63e13d1-dda1-4177-8dda-1a4d528ccd30] Running
	I1010 17:59:00.257688   89506 system_pods.go:61] "registry-66c9cd494c-4k74q" [604b3b36-a2fa-4e21-ab57-959fbdee9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:59:00.257696   89506 system_pods.go:61] "registry-proxy-f4hnz" [5d8faf25-5998-4727-be43-6800e479cc59] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:59:00.257704   89506 system_pods.go:61] "snapshot-controller-56fcc65765-k4k5t" [c574bf49-6f95-46f0-8719-6fabfdc878ca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:59:00.257713   89506 system_pods.go:61] "snapshot-controller-56fcc65765-pfvnl" [49fa4565-6758-44de-aac1-ab5277b25c51] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:59:00.257717   89506 system_pods.go:61] "storage-provisioner" [32649d6b-8dd1-4e8c-b16f-9fcc465018b5] Running
	I1010 17:59:00.257724   89506 system_pods.go:74] duration metric: took 10.632494ms to wait for pod list to return data ...
	I1010 17:59:00.257735   89506 default_sa.go:34] waiting for default service account to be created ...
	I1010 17:59:00.261134   89506 default_sa.go:45] found service account: "default"
	I1010 17:59:00.261168   89506 default_sa.go:55] duration metric: took 3.418904ms for default service account to be created ...
	I1010 17:59:00.261179   89506 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 17:59:00.271177   89506 system_pods.go:86] 17 kube-system pods found
	I1010 17:59:00.271214   89506 system_pods.go:89] "coredns-7c65d6cfc9-b5dd8" [fc517273-4630-428f-99ab-0965a9e1b483] Running
	I1010 17:59:00.271227   89506 system_pods.go:89] "csi-hostpath-attacher-0" [5b8784e3-5271-4a1e-a3fd-aa6f61bef065] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:59:00.271237   89506 system_pods.go:89] "csi-hostpath-resizer-0" [66c8883a-7176-4819-a2d9-e88a9f7e9311] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:59:00.271253   89506 system_pods.go:89] "csi-hostpathplugin-fmhgf" [b9750fdd-c60e-4cdb-ac1e-6d7ac5ec9aab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1010 17:59:00.271259   89506 system_pods.go:89] "etcd-addons-473910" [796b099e-eee7-4be3-9845-d78a9d74cbd6] Running
	I1010 17:59:00.271265   89506 system_pods.go:89] "kube-apiserver-addons-473910" [cd91ec20-324a-4b99-bd28-7d32f89d1e56] Running
	I1010 17:59:00.271272   89506 system_pods.go:89] "kube-controller-manager-addons-473910" [615b8b3b-c358-4f08-b0e5-63448f99a101] Running
	I1010 17:59:00.271278   89506 system_pods.go:89] "kube-ingress-dns-minikube" [292a5f4d-bcd5-4dd5-8530-4228a6d71ff5] Running
	I1010 17:59:00.271284   89506 system_pods.go:89] "kube-proxy-qx6m4" [5a52a8d5-4cda-449b-b74f-cbc835d4dc37] Running
	I1010 17:59:00.271291   89506 system_pods.go:89] "kube-scheduler-addons-473910" [a2234379-3bab-4bb8-be1e-da56ef4f0f89] Running
	I1010 17:59:00.271306   89506 system_pods.go:89] "metrics-server-84c5f94fbc-sr88b" [562db437-e740-4818-a2fd-dec917bd22cf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:59:00.271312   89506 system_pods.go:89] "nvidia-device-plugin-daemonset-6cgkn" [a63e13d1-dda1-4177-8dda-1a4d528ccd30] Running
	I1010 17:59:00.271322   89506 system_pods.go:89] "registry-66c9cd494c-4k74q" [604b3b36-a2fa-4e21-ab57-959fbdee9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:59:00.271332   89506 system_pods.go:89] "registry-proxy-f4hnz" [5d8faf25-5998-4727-be43-6800e479cc59] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:59:00.271346   89506 system_pods.go:89] "snapshot-controller-56fcc65765-k4k5t" [c574bf49-6f95-46f0-8719-6fabfdc878ca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:59:00.271356   89506 system_pods.go:89] "snapshot-controller-56fcc65765-pfvnl" [49fa4565-6758-44de-aac1-ab5277b25c51] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:59:00.271365   89506 system_pods.go:89] "storage-provisioner" [32649d6b-8dd1-4e8c-b16f-9fcc465018b5] Running
	I1010 17:59:00.271376   89506 system_pods.go:126] duration metric: took 10.189933ms to wait for k8s-apps to be running ...
	I1010 17:59:00.271391   89506 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 17:59:00.271449   89506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 17:59:00.276561   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:00.289564   89506 system_svc.go:56] duration metric: took 18.165533ms WaitForService to wait for kubelet
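(Note: the kubelet check above shells out to systemctl; `is-active --quiet` reports the result purely through its exit code. A hedged local sketch, ignoring the SSH transport the log shows:)

package example

import (
	"fmt"
	"os/exec"
)

// kubeletActive returns true when the kubelet unit reports active;
// a non-zero exit from systemctl means "not active" rather than an error.
func kubeletActive() (bool, error) {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err == nil {
		return true, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		return false, nil
	}
	return false, fmt.Errorf("running systemctl: %w", err)
}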
	I1010 17:59:00.289597   89506 kubeadm.go:582] duration metric: took 27.075678479s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 17:59:00.289619   89506 node_conditions.go:102] verifying NodePressure condition ...
	I1010 17:59:00.293057   89506 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 17:59:00.293087   89506 node_conditions.go:123] node cpu capacity is 2
	I1010 17:59:00.293103   89506 node_conditions.go:105] duration metric: took 3.477847ms to run NodePressure ...
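(Note: the ephemeral-storage and CPU figures logged above can be read from the node status. An illustrative client-go sketch, assuming an existing clientset; reading Capacity rather than Allocatable is an assumption of the example.)

package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists nodes and prints their ephemeral-storage and CPU capacity.
func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}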
	I1010 17:59:00.293120   89506 start.go:241] waiting for startup goroutines ...
	I1010 17:59:00.654810   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:00.719265   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:00.724028   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:00.775491   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:01.156275   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:01.219464   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:01.219534   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:01.275270   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:01.655539   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:01.718453   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:01.719066   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:01.775290   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:02.155833   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:02.219091   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:02.219455   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:02.275261   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:02.656151   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:02.736597   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:02.737149   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:02.774868   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:03.156263   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:03.218670   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:03.219880   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:03.274933   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:03.657591   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:03.719186   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:03.719663   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:03.775038   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:04.155797   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:04.220281   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:04.220759   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:04.275082   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:04.655834   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:04.719491   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:04.719923   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:04.774651   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:05.154994   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:05.218493   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:05.219237   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:05.274504   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:05.655813   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:05.719409   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:05.720082   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:05.775240   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:06.155356   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:06.219382   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:06.220191   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:06.274558   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:06.655977   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:06.720230   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:06.720620   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:06.775774   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:07.157686   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:07.257442   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:07.257578   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:07.274190   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:07.656221   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:07.756872   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:07.756873   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:07.774365   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:08.155397   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:08.218503   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:08.219300   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:08.275138   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:08.655251   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:08.719850   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:08.720047   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:08.774604   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:09.155825   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:09.218857   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:09.219699   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:09.275424   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:09.656835   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:09.757613   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:09.757737   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:09.775171   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:10.156325   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:10.220598   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:10.220843   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:10.275070   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:10.656092   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:10.719357   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:10.720749   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:10.774431   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:11.156150   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:11.219058   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:11.219096   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:11.275571   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:11.656069   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:11.720060   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:11.720625   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:11.774172   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:12.155883   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:12.219727   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:12.219785   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:12.275084   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:12.655697   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:13.114135   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:13.114350   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:13.114573   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:13.155493   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:13.219336   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:13.220093   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:13.274716   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:13.657994   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:13.719945   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:13.721434   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:13.775911   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:14.159480   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:14.219048   89506 kapi.go:107] duration metric: took 32.004326756s to wait for kubernetes.io/minikube-addons=registry ...
	I1010 17:59:14.219312   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:14.274874   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:14.655368   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:14.719400   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:14.774954   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:15.156030   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:15.220089   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:15.274950   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:15.656080   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:15.719488   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:15.775122   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:16.158176   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:16.219756   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:16.274091   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:16.656013   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:16.719249   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:16.774517   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:17.155193   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:17.219167   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:17.275081   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:17.655977   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:17.756170   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:17.774487   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:18.155363   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:18.218904   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:18.274487   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:18.656249   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:18.719194   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:18.775679   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:19.154686   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:19.218429   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:19.275366   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:20.015510   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:20.100910   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:20.100911   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:20.201860   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:20.301548   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:20.301868   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:20.655408   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:20.720464   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:20.775026   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:21.155802   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:21.219030   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:21.274313   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:21.655886   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:21.719724   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:21.774470   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:22.154804   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:22.218989   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:22.275050   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:22.655289   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:22.721021   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:23.044907   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:23.155431   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:23.219446   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:23.275154   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:23.656169   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:23.720546   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:23.775236   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:24.155575   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:24.218295   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:24.274597   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:24.654822   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:24.719964   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:24.819763   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:25.154755   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:25.218952   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:25.274774   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:25.657996   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:25.720681   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:25.775260   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:26.348765   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:26.349244   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:26.349577   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:26.655599   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:26.755613   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:26.774420   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:27.155832   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:27.218719   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:27.274249   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:27.655915   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:27.756923   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:27.774408   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:28.156277   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:28.220955   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:28.275246   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:28.655980   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:28.756807   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:28.774486   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:29.157195   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:29.219609   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:29.274956   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:29.655062   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:29.756002   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:29.776714   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:30.157526   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:30.218526   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:30.274848   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:30.666142   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:30.720341   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:30.775606   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:31.156072   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:31.218774   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:31.275271   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:31.656324   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:31.756181   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:31.774629   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:32.155894   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:32.218982   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:32.276279   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:32.655701   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:32.757053   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:32.774598   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:33.154908   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:33.219830   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:33.274055   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:33.657683   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:33.758318   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:33.776227   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:34.156608   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:34.219031   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:34.274886   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:34.658300   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:35.103311   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:35.120595   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:35.202983   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:35.218813   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:35.300975   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:35.655896   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:35.719603   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:35.775452   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:36.156363   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:36.227185   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:36.274841   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:36.655672   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:36.756671   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:36.775115   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:37.156118   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:37.235167   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:37.276675   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:37.656527   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:37.720904   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:37.823041   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:38.157988   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:38.219946   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:38.274323   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:38.656612   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:38.719982   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:38.774461   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:39.155867   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:39.219121   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:39.274532   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:39.655941   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:39.720979   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:39.776022   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:40.157172   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:40.222962   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:40.275864   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:40.655612   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:40.718976   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:40.774426   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:41.159848   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:41.219037   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:41.274781   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:41.655624   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:41.719210   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:41.774802   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:42.156325   89506 kapi.go:107] duration metric: took 59.005818145s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1010 17:59:42.219542   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:42.275521   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:42.720416   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:42.775019   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:43.219493   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:43.274914   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:43.719889   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:43.775066   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:44.219610   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:44.274050   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:44.719760   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:44.774420   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:45.219856   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:45.274513   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:45.719146   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:45.774773   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:46.219462   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:46.276284   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:46.719879   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:46.819389   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:47.219145   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:47.274924   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:48.054223   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:48.054505   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:48.222071   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:48.276143   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:48.721598   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:48.778613   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:49.219442   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:49.276224   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:49.722069   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:49.775020   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:50.219699   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:50.274717   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:50.719202   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:50.774871   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:51.218911   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:51.274411   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:51.720228   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:51.822178   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:52.219302   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:52.274894   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:52.722520   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:52.775384   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:53.219670   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:53.274567   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:53.719683   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:53.775098   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:54.219661   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:54.275098   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:54.720691   89506 kapi.go:107] duration metric: took 1m12.506145807s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1010 17:59:54.775347   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:55.275093   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:55.775095   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:56.274445   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:57.080577   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:57.275409   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:57.775492   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:58.275211   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:58.775454   89506 kapi.go:107] duration metric: took 1m13.504575758s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1010 17:59:58.777577   89506 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-473910 cluster.
	I1010 17:59:58.779170   89506 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1010 17:59:58.780864   89506 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1010 17:59:58.782468   89506 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, inspektor-gadget, metrics-server, ingress-dns, nvidia-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1010 17:59:58.783844   89506 addons.go:510] duration metric: took 1m25.569891907s for enable addons: enabled=[storage-provisioner cloud-spanner inspektor-gadget metrics-server ingress-dns nvidia-device-plugin yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1010 17:59:58.783887   89506 start.go:246] waiting for cluster config update ...
	I1010 17:59:58.783906   89506 start.go:255] writing updated cluster config ...
	I1010 17:59:58.784242   89506 ssh_runner.go:195] Run: rm -f paused
	I1010 17:59:58.835759   89506 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 17:59:58.837921   89506 out.go:177] * Done! kubectl is now configured to use "addons-473910" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.883143689Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5eac6f96-5ac6-4908-a343-e31b8a773d6f name=/runtime.v1.RuntimeService/Version
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.884347616Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=427b3578-6833-4efb-9c9b-924992715067 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.885617003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583377885588070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=427b3578-6833-4efb-9c9b-924992715067 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.886393622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=933e9c31-d852-49c4-b5a9-538f7380bada name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.886463551Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=933e9c31-d852-49c4-b5a9-538f7380bada name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.886891368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac61bcb449dd008022a57025113e8ce0f569b64dd266ea48ce07233fb05c610a,PodSandboxId:e7a9e829f252f54805a998ef17bcea567c25ab25439ae6921adb0f1744739151,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728583240469506019,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d2384a6-8648-46d4-94c5-9c3ec997ecdc,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690ee966904d3752b6c5ddc92710db914930936530246b1664d693c89208591,PodSandboxId:1c0c6796f2cdf53539eb2350758502cf2e8a059ecc2ef6925b7599ad07bf6e3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728583202391782724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee51e877-54f4-4ca0-84f2-3fa775f67d92,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43566588dce3fd6712aff49734940d5425bbf6f288ceb8d7bd40072f10a81e0f,PodSandboxId:e1de7e9b77a08d6abe8c07e4adab08364c1ca36ff23a5074cb98570dbc96cfb8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1728583193823901773,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-v52ps,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f8c4e168-e257-42e4-ba5b-7f8cb5719888,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:76ee693953c9b65b221ccf5f803b049898c204aee4420e37ef9e5c71b1e6843c,PodSandboxId:fd32d9d914762c515678c42ef59d4509db0a85ba76a99d05ca8d656573b91188,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1728583185813843875,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-27sgk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4624cbff-d48c-49bd-b234-31c140de96e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad185252d1c0b76c72be21abc80db975afc738ffa5a3df39d70cf854c9a1bfcb,PodSandboxId:f0703dfc07f5a0d8268cd5b3cf4b7806d199babb0de9f6fb267fcbbdc43b354e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1728583171850208615,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-l94zq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 56dedb76-0534-4a92-9bf2-b7cb973f888c,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6348a73c59af413c47d026c6c02ed02603f8f76800efceafe9257887000383,PodSandboxId:fd615680300bfde797e70660c3cef4366d0e0d9c9f827fdc718b34c6623f669a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728583155020959661,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-sr88b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562db437-e740-4818-a2fd-dec917bd22cf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a1abff35d7f67569365ef7bbf788348314c995f8337e6f8ffcbfd7f9f85f17,PodSandboxId:5beda9cff6889d9ac1f2aa524e7826bb3a0fce880ef726ae5df78c45f5dae72b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728583130926611151,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a5f4d-bcd5-4dd5-8530-4228a6d71ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deac66408019d59749c7de76502f032afd1679bedfd1fd5c55e029101716b9b0,PodSandboxId:1452b1a36b355be1d36ee0c3bb69f0ff0092a9cd99a2ccaf71e36d984
c28d1ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728583120245559496,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32649d6b-8dd1-4e8c-b16f-9fcc465018b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b221dbce7cbc7bb27dd0ca4b197a5089677416b54742e3f3eb98801b6f4b4f1a,PodSandboxId:e506ba1f2ea95b63a31f8b44fe7cfe38a0314c47c9e68a19780cfff48a00b77f,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728583117647931369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-b5dd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc517273-4630-428f-99ab-0965a9e1b483,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:a58a6a528d510cfebfd4412ea54a8ecf08518f84f3f8fdd857854d977196d4a3,PodSandboxId:eaa0c1c7802d21db183fd3618cfdc9827890205fede0d3a3891830b77372914d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728583114775787325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx6m4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52a8d5-4cda-449b-b74f-cbc835d4dc37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:f79440a61b5920283b88621a6580dd1741cd751a10ace64cc621653297aede51,PodSandboxId:777a98389002e1503fe2d4f0869e14ff24d933bee6b6395063c6987f3e685cbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728583103283194523,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27f453211eff4a3155dfbdf6354ecec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76288df1302c87280618ccfd56c890ed
f9404223642b2c7063f14fe9814b69e4,PodSandboxId:9913494a438b435077fd647dd0bd9a6d6542fbfba7d4dc003975ba06766ff53e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728583103261371138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3020ffd8df3abe5b746c768a102d7868,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18ceead39c176ae405041
27165ef6f70d34a3524c690cf597693a5cea85eb9f,PodSandboxId:8c0c63bcacfc537f877bae552d8cc5eff1e964ca3e05d5b23e5034f63a754d2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728583103294195727,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd938ef3bfcffd67f7fa1a9a06d155c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda4917567e30a8dffe800dded48fb64958951c
261adef9310b2ade87d30bf28,PodSandboxId:acd84339bc29fa286098e565057879464c6add2bf5ea93422bcc1376d3be191f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728583103275484443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e53a2912e9524a040780063467147bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=933e9c31-d852-49c4-b5
a9-538f7380bada name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.890777225Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.v2+json\"" file="docker/docker_client.go:964"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.890980149Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kicbase/echo-server:1.0\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.892140958Z" level=debug msg="Using registries.d directory /etc/containers/registries.d" file="docker/registries_d.go:80"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.892208008Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\"" file="docker/docker_image_src.go:87"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.892250416Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /run/containers/0/auth.json" file="config/config.go:846"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.892282744Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.config/containers/auth.json" file="config/config.go:846"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.892308596Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.docker/config.json" file="config/config.go:846"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.892336008Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.dockercfg" file="config/config.go:846"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.892357683Z" level=debug msg="No credentials for docker.io/kicbase/echo-server found" file="config/config.go:272"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.892388574Z" level=debug msg=" No signature storage configuration found for docker.io/kicbase/echo-server:1.0, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.892421338Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.892458433Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.924585917Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fb7fcb5-357f-45aa-b952-96a354321714 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.924675305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fb7fcb5-357f-45aa-b952-96a354321714 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.925681212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b29a3a43-a206-49a7-8cc8-40cf5b1af6f0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.926891043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583377926863372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b29a3a43-a206-49a7-8cc8-40cf5b1af6f0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.927393882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25d9ba11-7b20-47a3-a2ef-27cacf4b3d44 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.927445842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25d9ba11-7b20-47a3-a2ef-27cacf4b3d44 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:02:57 addons-473910 crio[664]: time="2024-10-10 18:02:57.927791738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ac61bcb449dd008022a57025113e8ce0f569b64dd266ea48ce07233fb05c610a,PodSandboxId:e7a9e829f252f54805a998ef17bcea567c25ab25439ae6921adb0f1744739151,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728583240469506019,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d2384a6-8648-46d4-94c5-9c3ec997ecdc,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690ee966904d3752b6c5ddc92710db914930936530246b1664d693c89208591,PodSandboxId:1c0c6796f2cdf53539eb2350758502cf2e8a059ecc2ef6925b7599ad07bf6e3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728583202391782724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee51e877-54f4-4ca0-84f2-3fa775f67d92,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43566588dce3fd6712aff49734940d5425bbf6f288ceb8d7bd40072f10a81e0f,PodSandboxId:e1de7e9b77a08d6abe8c07e4adab08364c1ca36ff23a5074cb98570dbc96cfb8,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1728583193823901773,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-v52ps,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f8c4e168-e257-42e4-ba5b-7f8cb5719888,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:76ee693953c9b65b221ccf5f803b049898c204aee4420e37ef9e5c71b1e6843c,PodSandboxId:fd32d9d914762c515678c42ef59d4509db0a85ba76a99d05ca8d656573b91188,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1728583185813843875,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-27sgk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4624cbff-d48c-49bd-b234-31c140de96e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad185252d1c0b76c72be21abc80db975afc738ffa5a3df39d70cf854c9a1bfcb,PodSandboxId:f0703dfc07f5a0d8268cd5b3cf4b7806d199babb0de9f6fb267fcbbdc43b354e,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1728583171850208615,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-l94zq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 56dedb76-0534-4a92-9bf2-b7cb973f888c,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6348a73c59af413c47d026c6c02ed02603f8f76800efceafe9257887000383,PodSandboxId:fd615680300bfde797e70660c3cef4366d0e0d9c9f827fdc718b34c6623f669a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728583155020959661,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-sr88b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562db437-e740-4818-a2fd-dec917bd22cf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02a1abff35d7f67569365ef7bbf788348314c995f8337e6f8ffcbfd7f9f85f17,PodSandboxId:5beda9cff6889d9ac1f2aa524e7826bb3a0fce880ef726ae5df78c45f5dae72b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728583130926611151,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292a5f4d-bcd5-4dd5-8530-4228a6d71ff5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deac66408019d59749c7de76502f032afd1679bedfd1fd5c55e029101716b9b0,PodSandboxId:1452b1a36b355be1d36ee0c3bb69f0ff0092a9cd99a2ccaf71e36d984
c28d1ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728583120245559496,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32649d6b-8dd1-4e8c-b16f-9fcc465018b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b221dbce7cbc7bb27dd0ca4b197a5089677416b54742e3f3eb98801b6f4b4f1a,PodSandboxId:e506ba1f2ea95b63a31f8b44fe7cfe38a0314c47c9e68a19780cfff48a00b77f,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728583117647931369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-b5dd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc517273-4630-428f-99ab-0965a9e1b483,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.p
od.terminationGracePeriod: 30,},},&Container{Id:a58a6a528d510cfebfd4412ea54a8ecf08518f84f3f8fdd857854d977196d4a3,PodSandboxId:eaa0c1c7802d21db183fd3618cfdc9827890205fede0d3a3891830b77372914d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728583114775787325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx6m4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52a8d5-4cda-449b-b74f-cbc835d4dc37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}
,&Container{Id:f79440a61b5920283b88621a6580dd1741cd751a10ace64cc621653297aede51,PodSandboxId:777a98389002e1503fe2d4f0869e14ff24d933bee6b6395063c6987f3e685cbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728583103283194523,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27f453211eff4a3155dfbdf6354ecec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76288df1302c87280618ccfd56c890ed
f9404223642b2c7063f14fe9814b69e4,PodSandboxId:9913494a438b435077fd647dd0bd9a6d6542fbfba7d4dc003975ba06766ff53e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728583103261371138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3020ffd8df3abe5b746c768a102d7868,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18ceead39c176ae405041
27165ef6f70d34a3524c690cf597693a5cea85eb9f,PodSandboxId:8c0c63bcacfc537f877bae552d8cc5eff1e964ca3e05d5b23e5034f63a754d2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728583103294195727,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd938ef3bfcffd67f7fa1a9a06d155c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda4917567e30a8dffe800dded48fb64958951c
261adef9310b2ade87d30bf28,PodSandboxId:acd84339bc29fa286098e565057879464c6add2bf5ea93422bcc1376d3be191f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728583103275484443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e53a2912e9524a040780063467147bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25d9ba11-7b20-47a3-a2
ef-27cacf4b3d44 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ac61bcb449dd0       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   e7a9e829f252f       nginx
	6690ee966904d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   1c0c6796f2cdf       busybox
	43566588dce3f       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   e1de7e9b77a08       ingress-nginx-controller-5f85ff4588-v52ps
	76ee693953c9b       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     2                   fd32d9d914762       ingress-nginx-admission-patch-27sgk
	ad185252d1c0b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   f0703dfc07f5a       ingress-nginx-admission-create-l94zq
	0b6348a73c59a       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        3 minutes ago       Running             metrics-server            0                   fd615680300bf       metrics-server-84c5f94fbc-sr88b
	02a1abff35d7f       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   5beda9cff6889       kube-ingress-dns-minikube
	deac66408019d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   1452b1a36b355       storage-provisioner
	b221dbce7cbc7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   e506ba1f2ea95       coredns-7c65d6cfc9-b5dd8
	a58a6a528d510       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             4 minutes ago       Running             kube-proxy                0                   eaa0c1c7802d2       kube-proxy-qx6m4
	d18ceead39c17       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             4 minutes ago       Running             kube-apiserver            0                   8c0c63bcacfc5       kube-apiserver-addons-473910
	f79440a61b592       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago       Running             etcd                      0                   777a98389002e       etcd-addons-473910
	dda4917567e30       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             4 minutes ago       Running             kube-scheduler            0                   acd84339bc29f       kube-scheduler-addons-473910
	76288df1302c8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             4 minutes ago       Running             kube-controller-manager   0                   9913494a438b4       kube-controller-manager-addons-473910
	
	
	==> coredns [b221dbce7cbc7bb27dd0ca4b197a5089677416b54742e3f3eb98801b6f4b4f1a] <==
	[INFO] 10.244.0.7:47959 - 7982 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000093971s
	[INFO] 10.244.0.7:47959 - 26352 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000094174s
	[INFO] 10.244.0.7:47959 - 11531 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000066822s
	[INFO] 10.244.0.7:47959 - 26373 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000085319s
	[INFO] 10.244.0.7:47959 - 12351 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000111335s
	[INFO] 10.244.0.7:47959 - 40152 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000089472s
	[INFO] 10.244.0.7:47959 - 47397 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000405674s
	[INFO] 10.244.0.7:49472 - 63965 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000528973s
	[INFO] 10.244.0.7:49472 - 64255 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000048678s
	[INFO] 10.244.0.7:38473 - 51842 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049937s
	[INFO] 10.244.0.7:38473 - 52045 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000070746s
	[INFO] 10.244.0.7:40947 - 1388 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003482s
	[INFO] 10.244.0.7:40947 - 1640 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000032043s
	[INFO] 10.244.0.7:41931 - 23426 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000044408s
	[INFO] 10.244.0.7:41931 - 23597 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000077184s
	[INFO] 10.244.0.22:52058 - 54140 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000530915s
	[INFO] 10.244.0.22:51435 - 11009 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000097536s
	[INFO] 10.244.0.22:34751 - 27477 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000211459s
	[INFO] 10.244.0.22:39882 - 50700 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126653s
	[INFO] 10.244.0.22:57428 - 35982 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100495s
	[INFO] 10.244.0.22:35727 - 13149 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090232s
	[INFO] 10.244.0.22:60268 - 48476 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001047762s
	[INFO] 10.244.0.22:37980 - 23237 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001423016s
	[INFO] 10.244.0.24:39378 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000468715s
	[INFO] 10.244.0.24:48030 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000112463s
	
	
	==> describe nodes <==
	Name:               addons-473910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-473910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=addons-473910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T17_58_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-473910
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 17:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-473910
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:02:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:02:03 +0000   Thu, 10 Oct 2024 17:58:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:02:03 +0000   Thu, 10 Oct 2024 17:58:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:02:03 +0000   Thu, 10 Oct 2024 17:58:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:02:03 +0000   Thu, 10 Oct 2024 17:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    addons-473910
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b93c70cc0c0471baa6df81ff59b007c
	  System UUID:                4b93c70c-c0c0-471b-aa6d-f81ff59b007c
	  Boot ID:                    74cb2697-df15-48c6-999e-efbc2fa7d0aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     hello-world-app-55bf9c44b4-hjz49             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-v52ps    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m16s
	  kube-system                 coredns-7c65d6cfc9-b5dd8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m25s
	  kube-system                 etcd-addons-473910                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m30s
	  kube-system                 kube-apiserver-addons-473910                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-controller-manager-addons-473910        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-qx6m4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-addons-473910                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 metrics-server-84c5f94fbc-sr88b              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m19s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m22s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s (x8 over 4m36s)  kubelet          Node addons-473910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s (x8 over 4m36s)  kubelet          Node addons-473910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s (x7 over 4m36s)  kubelet          Node addons-473910 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m30s (x2 over 4m30s)  kubelet          Node addons-473910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m30s (x2 over 4m30s)  kubelet          Node addons-473910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m30s (x2 over 4m30s)  kubelet          Node addons-473910 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m29s                  kubelet          Node addons-473910 status is now: NodeReady
	  Normal  RegisteredNode           4m26s                  node-controller  Node addons-473910 event: Registered Node addons-473910 in Controller
	  Normal  CIDRAssignmentFailed     4m26s                  cidrAllocator    Node addons-473910 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +5.036302] kauditd_printk_skb: 101 callbacks suppressed
	[  +6.508460] kauditd_printk_skb: 88 callbacks suppressed
	[Oct10 17:59] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.344045] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.696273] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.052645] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.228022] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.303127] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.327054] kauditd_printk_skb: 25 callbacks suppressed
	[ +12.787507] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.941270] kauditd_printk_skb: 6 callbacks suppressed
	[Oct10 18:00] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.549335] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.836367] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.020223] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.999029] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.062895] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.995859] kauditd_printk_skb: 16 callbacks suppressed
	[Oct10 18:01] kauditd_printk_skb: 25 callbacks suppressed
	[ +19.827274] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.885716] kauditd_printk_skb: 7 callbacks suppressed
	[ +14.838694] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.002637] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.844652] kauditd_printk_skb: 3 callbacks suppressed
	[Oct10 18:02] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [f79440a61b5920283b88621a6580dd1741cd751a10ace64cc621653297aede51] <==
	{"level":"info","ts":"2024-10-10T17:59:57.065973Z","caller":"traceutil/trace.go:171","msg":"trace[927349461] linearizableReadLoop","detail":"{readStateIndex:1185; appliedIndex:1184; }","duration":"361.535534ms","start":"2024-10-10T17:59:56.704419Z","end":"2024-10-10T17:59:57.065955Z","steps":["trace[927349461] 'read index received'  (duration: 361.296293ms)","trace[927349461] 'applied index is now lower than readState.Index'  (duration: 238.723µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-10T17:59:57.066123Z","caller":"traceutil/trace.go:171","msg":"trace[892081535] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"421.415918ms","start":"2024-10-10T17:59:56.644646Z","end":"2024-10-10T17:59:57.066062Z","steps":["trace[892081535] 'process raft request'  (duration: 421.20048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T17:59:57.066206Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T17:59:56.644632Z","time spent":"421.511982ms","remote":"127.0.0.1:46698","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1143 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-10T17:59:57.066205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"303.533801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-10T17:59:57.066241Z","caller":"traceutil/trace.go:171","msg":"trace[1787784892] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1147; }","duration":"303.577779ms","start":"2024-10-10T17:59:56.762655Z","end":"2024-10-10T17:59:57.066233Z","steps":["trace[1787784892] 'agreement among raft nodes before linearized reading'  (duration: 303.5145ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T17:59:57.066275Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T17:59:56.762615Z","time spent":"303.655046ms","remote":"127.0.0.1:46708","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-10T17:59:57.066426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.563539ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-10T17:59:57.066442Z","caller":"traceutil/trace.go:171","msg":"trace[1757354927] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1147; }","duration":"263.579522ms","start":"2024-10-10T17:59:56.802857Z","end":"2024-10-10T17:59:57.066436Z","steps":["trace[1757354927] 'agreement among raft nodes before linearized reading'  (duration: 263.555747ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T17:59:57.066456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.171757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-10-10T17:59:57.066474Z","caller":"traceutil/trace.go:171","msg":"trace[979404040] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1147; }","duration":"252.190694ms","start":"2024-10-10T17:59:56.814279Z","end":"2024-10-10T17:59:57.066469Z","steps":["trace[979404040] 'agreement among raft nodes before linearized reading'  (duration: 252.115879ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T17:59:57.066601Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.197016ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-10T17:59:57.066617Z","caller":"traceutil/trace.go:171","msg":"trace[2140921366] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1147; }","duration":"362.219092ms","start":"2024-10-10T17:59:56.704393Z","end":"2024-10-10T17:59:57.066612Z","steps":["trace[2140921366] 'agreement among raft nodes before linearized reading'  (duration: 362.18563ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T17:59:57.066640Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T17:59:56.704359Z","time spent":"362.274782ms","remote":"127.0.0.1:40238","response type":"/etcdserverpb.KV/Range","request count":0,"request size":118,"response count":1,"response size":30,"request content":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true "}
	{"level":"info","ts":"2024-10-10T18:01:05.903713Z","caller":"traceutil/trace.go:171","msg":"trace[1475457963] linearizableReadLoop","detail":"{readStateIndex:1623; appliedIndex:1622; }","duration":"391.489148ms","start":"2024-10-10T18:01:05.512207Z","end":"2024-10-10T18:01:05.903696Z","steps":["trace[1475457963] 'read index received'  (duration: 391.346663ms)","trace[1475457963] 'applied index is now lower than readState.Index'  (duration: 142.075µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-10T18:01:05.903937Z","caller":"traceutil/trace.go:171","msg":"trace[1298268550] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"402.691115ms","start":"2024-10-10T18:01:05.501239Z","end":"2024-10-10T18:01:05.903930Z","steps":["trace[1298268550] 'process raft request'  (duration: 402.36557ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T18:01:05.904176Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T18:01:05.501209Z","time spent":"402.772231ms","remote":"127.0.0.1:46698","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1548 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-10T18:01:05.904174Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"324.936917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:1 size:636"}
	{"level":"info","ts":"2024-10-10T18:01:05.904932Z","caller":"traceutil/trace.go:171","msg":"trace[61111919] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:1562; }","duration":"325.701218ms","start":"2024-10-10T18:01:05.579220Z","end":"2024-10-10T18:01:05.904922Z","steps":["trace[61111919] 'agreement among raft nodes before linearized reading'  (duration: 324.820888ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T18:01:05.904971Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T18:01:05.579186Z","time spent":"325.775435ms","remote":"127.0.0.1:46622","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":1,"response size":659,"request content":"key:\"/registry/namespaces/local-path-storage\" "}
	{"level":"warn","ts":"2024-10-10T18:01:05.904237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"392.025689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2024-10-10T18:01:05.905124Z","caller":"traceutil/trace.go:171","msg":"trace[1421856629] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1562; }","duration":"392.914078ms","start":"2024-10-10T18:01:05.512203Z","end":"2024-10-10T18:01:05.905117Z","steps":["trace[1421856629] 'agreement among raft nodes before linearized reading'  (duration: 391.999233ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T18:01:05.905148Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T18:01:05.512169Z","time spent":"392.972751ms","remote":"127.0.0.1:46688","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":1005,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" "}
	{"level":"warn","ts":"2024-10-10T18:01:05.904259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.045374ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-10T18:01:05.905385Z","caller":"traceutil/trace.go:171","msg":"trace[297005619] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1562; }","duration":"102.17041ms","start":"2024-10-10T18:01:05.803205Z","end":"2024-10-10T18:01:05.905375Z","steps":["trace[297005619] 'agreement among raft nodes before linearized reading'  (duration: 101.040888ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-10T18:02:21.702911Z","caller":"traceutil/trace.go:171","msg":"trace[1826958245] transaction","detail":"{read_only:false; response_revision:1878; number_of_response:1; }","duration":"218.689166ms","start":"2024-10-10T18:02:21.483969Z","end":"2024-10-10T18:02:21.702658Z","steps":["trace[1826958245] 'process raft request'  (duration: 218.553419ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:02:58 up 5 min,  0 users,  load average: 0.48, 0.91, 0.47
	Linux addons-473910 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d18ceead39c176ae40504127165ef6f70d34a3524c690cf597693a5cea85eb9f] <==
	E1010 18:00:19.877357       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.141.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.141.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.141.215:443: connect: connection refused" logger="UnhandledError"
	E1010 18:00:19.882401       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.141.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.141.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.141.215:443: connect: connection refused" logger="UnhandledError"
	I1010 18:00:19.958438       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1010 18:00:37.971510       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1010 18:00:38.146175       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.147.204"}
	I1010 18:00:38.815000       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1010 18:00:39.970282       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1010 18:01:02.543718       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1010 18:01:15.873369       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1010 18:01:31.261684       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1010 18:01:31.262056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1010 18:01:31.297493       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1010 18:01:31.297604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1010 18:01:31.315010       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1010 18:01:31.315179       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1010 18:01:31.332984       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1010 18:01:31.333042       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1010 18:01:31.512675       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1010 18:01:31.512744       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1010 18:01:31.623236       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W1010 18:01:32.315475       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1010 18:01:32.515268       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1010 18:01:32.550600       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1010 18:01:45.020949       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.41.79"}
	I1010 18:02:56.750163       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.212.100"}
	
	
	==> kube-controller-manager [76288df1302c87280618ccfd56c890edf9404223642b2c7063f14fe9814b69e4] <==
	I1010 18:01:51.038636       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="76.656µs"
	W1010 18:01:51.357009       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:01:51.357047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:01:52.552930       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:01:52.552967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:01:53.653244       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:01:53.653301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1010 18:01:56.585207       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="4.145µs"
	I1010 18:02:03.752451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-473910"
	I1010 18:02:06.688729       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W1010 18:02:08.644724       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:02:08.644785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:02:10.961398       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:02:10.961474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:02:11.186708       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:02:11.186773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:02:36.247225       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:02:36.247322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:02:38.357909       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:02:38.357961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:02:41.224494       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:02:41.224619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1010 18:02:56.586658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="33.801321ms"
	I1010 18:02:56.619443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="32.67697ms"
	I1010 18:02:56.621265       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="95.136µs"
	
	
	==> kube-proxy [a58a6a528d510cfebfd4412ea54a8ecf08518f84f3f8fdd857854d977196d4a3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 17:58:35.616671       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 17:58:35.626711       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.238"]
	E1010 17:58:35.626834       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 17:58:35.717285       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 17:58:35.717356       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 17:58:35.717382       1 server_linux.go:169] "Using iptables Proxier"
	I1010 17:58:35.753313       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 17:58:35.753613       1 server.go:483] "Version info" version="v1.31.1"
	I1010 17:58:35.753626       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 17:58:35.756156       1 config.go:199] "Starting service config controller"
	I1010 17:58:35.756185       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 17:58:35.756207       1 config.go:105] "Starting endpoint slice config controller"
	I1010 17:58:35.756211       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 17:58:35.756560       1 config.go:328] "Starting node config controller"
	I1010 17:58:35.756590       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 17:58:35.856444       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1010 17:58:35.856507       1 shared_informer.go:320] Caches are synced for service config
	I1010 17:58:35.856751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dda4917567e30a8dffe800dded48fb64958951c261adef9310b2ade87d30bf28] <==
	W1010 17:58:26.788527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1010 17:58:26.788561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:26.821495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1010 17:58:26.821600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:26.853322       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1010 17:58:26.854804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:26.859947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1010 17:58:26.860061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:26.910697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1010 17:58:26.910749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.137255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1010 17:58:27.137388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.177491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1010 17:58:27.177543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.218491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1010 17:58:27.218627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.227220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1010 17:58:27.227295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.234612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1010 17:58:27.234664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.241135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 17:58:27.241212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.372241       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1010 17:58:27.372506       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1010 17:58:29.131164       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 10 18:01:58 addons-473910 kubelet[1205]: I1010 18:01:58.788024    1205 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c102612e-1e74-4371-a685-fd6e81194b45" path="/var/lib/kubelet/pods/c102612e-1e74-4371-a685-fd6e81194b45/volumes"
	Oct 10 18:01:58 addons-473910 kubelet[1205]: E1010 18:01:58.992546    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583318992169265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:01:58 addons-473910 kubelet[1205]: E1010 18:01:58.992595    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583318992169265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:02:08 addons-473910 kubelet[1205]: E1010 18:02:08.996787    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583328995800059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:02:08 addons-473910 kubelet[1205]: E1010 18:02:08.996895    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583328995800059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:02:19 addons-473910 kubelet[1205]: E1010 18:02:19.004887    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583338999603155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:02:19 addons-473910 kubelet[1205]: E1010 18:02:19.005278    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583338999603155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:02:28 addons-473910 kubelet[1205]: E1010 18:02:28.818538    1205 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 18:02:28 addons-473910 kubelet[1205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 18:02:28 addons-473910 kubelet[1205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 18:02:28 addons-473910 kubelet[1205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 18:02:28 addons-473910 kubelet[1205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 18:02:29 addons-473910 kubelet[1205]: E1010 18:02:29.009414    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583349008784280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:02:29 addons-473910 kubelet[1205]: E1010 18:02:29.009639    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583349008784280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:02:29 addons-473910 kubelet[1205]: I1010 18:02:29.198941    1205 scope.go:117] "RemoveContainer" containerID="a1ebcaeab205f18b245f6448b656e819ce4eafe0dc4218b4c0d2d896443eec86"
	Oct 10 18:02:39 addons-473910 kubelet[1205]: E1010 18:02:39.012932    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583359012572736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:02:39 addons-473910 kubelet[1205]: E1010 18:02:39.012981    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583359012572736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:02:39 addons-473910 kubelet[1205]: I1010 18:02:39.784582    1205 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 10 18:02:49 addons-473910 kubelet[1205]: E1010 18:02:49.015600    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583369015202341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:02:49 addons-473910 kubelet[1205]: E1010 18:02:49.015674    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583369015202341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574227,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:02:56 addons-473910 kubelet[1205]: E1010 18:02:56.583936    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cfe6418-5799-4105-8960-ac0bedaff76f" containerName="cloud-spanner-emulator"
	Oct 10 18:02:56 addons-473910 kubelet[1205]: E1010 18:02:56.583987    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c102612e-1e74-4371-a685-fd6e81194b45" containerName="headlamp"
	Oct 10 18:02:56 addons-473910 kubelet[1205]: I1010 18:02:56.584028    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfe6418-5799-4105-8960-ac0bedaff76f" containerName="cloud-spanner-emulator"
	Oct 10 18:02:56 addons-473910 kubelet[1205]: I1010 18:02:56.584035    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="c102612e-1e74-4371-a685-fd6e81194b45" containerName="headlamp"
	Oct 10 18:02:56 addons-473910 kubelet[1205]: I1010 18:02:56.701193    1205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwwn8\" (UniqueName: \"kubernetes.io/projected/5c2a63b0-9bc9-4632-83c5-68ff53b86390-kube-api-access-nwwn8\") pod \"hello-world-app-55bf9c44b4-hjz49\" (UID: \"5c2a63b0-9bc9-4632-83c5-68ff53b86390\") " pod="default/hello-world-app-55bf9c44b4-hjz49"
	
	
	==> storage-provisioner [deac66408019d59749c7de76502f032afd1679bedfd1fd5c55e029101716b9b0] <==
	I1010 17:58:40.658372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 17:58:40.692360       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 17:58:40.692440       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 17:58:40.710950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 17:58:40.711127       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-473910_27d4675b-7e18-4c78-8a84-45db36aafbd9!
	I1010 17:58:40.713970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"335f240f-ce46-419a-9030-88aacf16bc62", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-473910_27d4675b-7e18-4c78-8a84-45db36aafbd9 became leader
	I1010 17:58:40.814288       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-473910_27d4675b-7e18-4c78-8a84-45db36aafbd9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-473910 -n addons-473910
helpers_test.go:261: (dbg) Run:  kubectl --context addons-473910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-hjz49 ingress-nginx-admission-create-l94zq ingress-nginx-admission-patch-27sgk
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-473910 describe pod hello-world-app-55bf9c44b4-hjz49 ingress-nginx-admission-create-l94zq ingress-nginx-admission-patch-27sgk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-473910 describe pod hello-world-app-55bf9c44b4-hjz49 ingress-nginx-admission-create-l94zq ingress-nginx-admission-patch-27sgk: exit status 1 (80.32018ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-hjz49
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-473910/192.168.39.238
	Start Time:       Thu, 10 Oct 2024 18:02:56 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nwwn8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nwwn8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-hjz49 to addons-473910
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l94zq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-27sgk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-473910 describe pod hello-world-app-55bf9c44b4-hjz49 ingress-nginx-admission-create-l94zq ingress-nginx-admission-patch-27sgk: exit status 1
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-473910 addons disable ingress-dns --alsologtostderr -v=1: (1.197748826s)
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable ingress --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-473910 addons disable ingress --alsologtostderr -v=1: (7.735798229s)
--- FAIL: TestAddons/parallel/Ingress (150.45s)

                                                
                                    
TestAddons/parallel/MetricsServer (350.41s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
I1010 18:00:21.172786   88876 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:394: metrics-server stabilized in 2.571648ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-sr88b" [562db437-e740-4818-a2fd-dec917bd22cf] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00653587s
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (77.55188ms)

                                                
                                                
** stderr ** 
	error: metrics not available yet

                                                
                                                
** /stderr **
I1010 18:00:27.260174   88876 retry.go:31] will retry after 4.204327519s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (68.145967ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-473910, age: 2m3.531543961s

                                                
                                                
** /stderr **
I1010 18:00:31.533712   88876 retry.go:31] will retry after 4.920039679s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (66.598368ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-b5dd8, age: 2m3.518503468s

                                                
                                                
** /stderr **
I1010 18:00:36.520630   88876 retry.go:31] will retry after 9.501393182s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (68.004694ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-b5dd8, age: 2m13.089151977s

                                                
                                                
** /stderr **
I1010 18:00:46.091304   88876 retry.go:31] will retry after 11.82751395s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (72.733301ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-b5dd8, age: 2m24.990022043s

                                                
                                                
** /stderr **
I1010 18:00:57.992563   88876 retry.go:31] will retry after 21.477045296s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (70.284205ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-b5dd8, age: 2m46.538823056s

                                                
                                                
** /stderr **
I1010 18:01:19.540790   88876 retry.go:31] will retry after 16.432096549s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (67.144515ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-b5dd8, age: 3m3.041655204s

                                                
                                                
** /stderr **
I1010 18:01:36.043697   88876 retry.go:31] will retry after 27.149549719s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (66.901093ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-b5dd8, age: 3m30.258856989s

                                                
                                                
** /stderr **
I1010 18:02:03.260975   88876 retry.go:31] will retry after 33.046408066s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (65.596319ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-b5dd8, age: 4m3.37470901s

                                                
                                                
** /stderr **
I1010 18:02:36.376692   88876 retry.go:31] will retry after 1m3.776326225s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (65.968568ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-b5dd8, age: 5m7.217158448s

                                                
                                                
** /stderr **
I1010 18:03:40.219582   88876 retry.go:31] will retry after 34.90496235s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (67.380701ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-b5dd8, age: 5m42.192396738s

                                                
                                                
** /stderr **
I1010 18:04:15.194544   88876 retry.go:31] will retry after 49.384109866s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (65.763059ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-b5dd8, age: 6m31.643425683s

                                                
                                                
** /stderr **
I1010 18:05:04.646011   88876 retry.go:31] will retry after 1m4.181529492s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-473910 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-473910 top pods -n kube-system: exit status 1 (63.898034ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-b5dd8, age: 7m35.895097355s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-473910 -n addons-473910
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-473910 logs -n 25: (1.26694882s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-058787                                                                     | download-only-058787 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:57 UTC |
	| delete  | -p download-only-497455                                                                     | download-only-497455 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-244092 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC |                     |
	|         | binary-mirror-244092                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46773                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-244092                                                                     | binary-mirror-244092 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:57 UTC |
	| addons  | enable dashboard -p                                                                         | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC |                     |
	|         | addons-473910                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC |                     |
	|         | addons-473910                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-473910 --wait=true                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 17:59 UTC | 10 Oct 24 17:59 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ip      | addons-473910 ip                                                                            | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-473910 addons                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-473910 ssh curl -s                                                                   | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-473910 ssh cat                                                                       | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:00 UTC |
	|         | /opt/local-path-provisioner/pvc-4c005375-b770-4d67-a3b3-31e1e4368658_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:00 UTC | 10 Oct 24 18:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473910 addons                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473910 addons                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473910 addons                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:01 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:01 UTC |
	|         | -p addons-473910                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473910 addons                                                                        | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:01 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:01 UTC | 10 Oct 24 18:02 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-473910 ip                                                                            | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:02 UTC | 10 Oct 24 18:02 UTC |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:02 UTC | 10 Oct 24 18:03 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-473910 addons disable                                                                | addons-473910        | jenkins | v1.34.0 | 10 Oct 24 18:03 UTC | 10 Oct 24 18:03 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 17:57:48
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 17:57:48.070881   89506 out.go:345] Setting OutFile to fd 1 ...
	I1010 17:57:48.071011   89506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 17:57:48.071023   89506 out.go:358] Setting ErrFile to fd 2...
	I1010 17:57:48.071030   89506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 17:57:48.071233   89506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 17:57:48.071830   89506 out.go:352] Setting JSON to false
	I1010 17:57:48.072647   89506 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6014,"bootTime":1728577054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 17:57:48.072748   89506 start.go:139] virtualization: kvm guest
	I1010 17:57:48.074963   89506 out.go:177] * [addons-473910] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 17:57:48.076366   89506 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 17:57:48.076387   89506 notify.go:220] Checking for updates...
	I1010 17:57:48.079485   89506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 17:57:48.081066   89506 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 17:57:48.082398   89506 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 17:57:48.083656   89506 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 17:57:48.084804   89506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 17:57:48.086204   89506 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 17:57:48.120527   89506 out.go:177] * Using the kvm2 driver based on user configuration
	I1010 17:57:48.121879   89506 start.go:297] selected driver: kvm2
	I1010 17:57:48.121898   89506 start.go:901] validating driver "kvm2" against <nil>
	I1010 17:57:48.121909   89506 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 17:57:48.122655   89506 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 17:57:48.122736   89506 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 17:57:48.137648   89506 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 17:57:48.137702   89506 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 17:57:48.137941   89506 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 17:57:48.137973   89506 cni.go:84] Creating CNI manager for ""
	I1010 17:57:48.138021   89506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 17:57:48.138029   89506 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 17:57:48.138098   89506 start.go:340] cluster config:
	{Name:addons-473910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:57:48.138194   89506 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 17:57:48.140093   89506 out.go:177] * Starting "addons-473910" primary control-plane node in "addons-473910" cluster
	I1010 17:57:48.141415   89506 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 17:57:48.141448   89506 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 17:57:48.141457   89506 cache.go:56] Caching tarball of preloaded images
	I1010 17:57:48.141538   89506 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 17:57:48.141550   89506 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 17:57:48.141881   89506 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/config.json ...
	I1010 17:57:48.141907   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/config.json: {Name:mke534372be6f27906cf058c392cb887dd55fb57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:57:48.142055   89506 start.go:360] acquireMachinesLock for addons-473910: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 17:57:48.142099   89506 start.go:364] duration metric: took 31.957µs to acquireMachinesLock for "addons-473910"
	I1010 17:57:48.142115   89506 start.go:93] Provisioning new machine with config: &{Name:addons-473910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 17:57:48.142169   89506 start.go:125] createHost starting for "" (driver="kvm2")
	I1010 17:57:48.143734   89506 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1010 17:57:48.143874   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:57:48.143914   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:57:48.158502   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40005
	I1010 17:57:48.159083   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:57:48.159752   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:57:48.159776   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:57:48.160146   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:57:48.160365   89506 main.go:141] libmachine: (addons-473910) Calling .GetMachineName
	I1010 17:57:48.160554   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:57:48.160716   89506 start.go:159] libmachine.API.Create for "addons-473910" (driver="kvm2")
	I1010 17:57:48.160747   89506 client.go:168] LocalClient.Create starting
	I1010 17:57:48.160786   89506 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 17:57:48.228829   89506 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 17:57:48.488615   89506 main.go:141] libmachine: Running pre-create checks...
	I1010 17:57:48.488644   89506 main.go:141] libmachine: (addons-473910) Calling .PreCreateCheck
	I1010 17:57:48.489161   89506 main.go:141] libmachine: (addons-473910) Calling .GetConfigRaw
	I1010 17:57:48.489670   89506 main.go:141] libmachine: Creating machine...
	I1010 17:57:48.489687   89506 main.go:141] libmachine: (addons-473910) Calling .Create
	I1010 17:57:48.489903   89506 main.go:141] libmachine: (addons-473910) Creating KVM machine...
	I1010 17:57:48.491281   89506 main.go:141] libmachine: (addons-473910) DBG | found existing default KVM network
	I1010 17:57:48.492080   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:48.491917   89528 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I1010 17:57:48.492142   89506 main.go:141] libmachine: (addons-473910) DBG | created network xml: 
	I1010 17:57:48.492168   89506 main.go:141] libmachine: (addons-473910) DBG | <network>
	I1010 17:57:48.492178   89506 main.go:141] libmachine: (addons-473910) DBG |   <name>mk-addons-473910</name>
	I1010 17:57:48.492192   89506 main.go:141] libmachine: (addons-473910) DBG |   <dns enable='no'/>
	I1010 17:57:48.492201   89506 main.go:141] libmachine: (addons-473910) DBG |   
	I1010 17:57:48.492210   89506 main.go:141] libmachine: (addons-473910) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1010 17:57:48.492219   89506 main.go:141] libmachine: (addons-473910) DBG |     <dhcp>
	I1010 17:57:48.492227   89506 main.go:141] libmachine: (addons-473910) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1010 17:57:48.492255   89506 main.go:141] libmachine: (addons-473910) DBG |     </dhcp>
	I1010 17:57:48.492279   89506 main.go:141] libmachine: (addons-473910) DBG |   </ip>
	I1010 17:57:48.492292   89506 main.go:141] libmachine: (addons-473910) DBG |   
	I1010 17:57:48.492301   89506 main.go:141] libmachine: (addons-473910) DBG | </network>
	I1010 17:57:48.492314   89506 main.go:141] libmachine: (addons-473910) DBG | 
	I1010 17:57:48.497633   89506 main.go:141] libmachine: (addons-473910) DBG | trying to create private KVM network mk-addons-473910 192.168.39.0/24...
	I1010 17:57:48.565742   89506 main.go:141] libmachine: (addons-473910) DBG | private KVM network mk-addons-473910 192.168.39.0/24 created
	I1010 17:57:48.565778   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:48.565696   89528 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 17:57:48.565809   89506 main.go:141] libmachine: (addons-473910) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910 ...
	I1010 17:57:48.565829   89506 main.go:141] libmachine: (addons-473910) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 17:57:48.565925   89506 main.go:141] libmachine: (addons-473910) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 17:57:48.849550   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:48.849386   89528 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa...
	I1010 17:57:48.967274   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:48.967133   89528 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/addons-473910.rawdisk...
	I1010 17:57:48.967309   89506 main.go:141] libmachine: (addons-473910) DBG | Writing magic tar header
	I1010 17:57:48.967323   89506 main.go:141] libmachine: (addons-473910) DBG | Writing SSH key tar header
	I1010 17:57:48.967336   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:48.967252   89528 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910 ...
	I1010 17:57:48.967352   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910
	I1010 17:57:48.967370   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910 (perms=drwx------)
	I1010 17:57:48.967385   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 17:57:48.967395   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 17:57:48.967401   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 17:57:48.967408   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 17:57:48.967412   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home/jenkins
	I1010 17:57:48.967420   89506 main.go:141] libmachine: (addons-473910) DBG | Checking permissions on dir: /home
	I1010 17:57:48.967428   89506 main.go:141] libmachine: (addons-473910) DBG | Skipping /home - not owner
	I1010 17:57:48.967473   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 17:57:48.967503   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 17:57:48.967515   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 17:57:48.967523   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 17:57:48.967537   89506 main.go:141] libmachine: (addons-473910) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 17:57:48.967545   89506 main.go:141] libmachine: (addons-473910) Creating domain...
	I1010 17:57:48.968726   89506 main.go:141] libmachine: (addons-473910) define libvirt domain using xml: 
	I1010 17:57:48.968757   89506 main.go:141] libmachine: (addons-473910) <domain type='kvm'>
	I1010 17:57:48.968768   89506 main.go:141] libmachine: (addons-473910)   <name>addons-473910</name>
	I1010 17:57:48.968779   89506 main.go:141] libmachine: (addons-473910)   <memory unit='MiB'>4000</memory>
	I1010 17:57:48.968787   89506 main.go:141] libmachine: (addons-473910)   <vcpu>2</vcpu>
	I1010 17:57:48.968797   89506 main.go:141] libmachine: (addons-473910)   <features>
	I1010 17:57:48.968804   89506 main.go:141] libmachine: (addons-473910)     <acpi/>
	I1010 17:57:48.968811   89506 main.go:141] libmachine: (addons-473910)     <apic/>
	I1010 17:57:48.968821   89506 main.go:141] libmachine: (addons-473910)     <pae/>
	I1010 17:57:48.968829   89506 main.go:141] libmachine: (addons-473910)     
	I1010 17:57:48.968864   89506 main.go:141] libmachine: (addons-473910)   </features>
	I1010 17:57:48.968883   89506 main.go:141] libmachine: (addons-473910)   <cpu mode='host-passthrough'>
	I1010 17:57:48.968894   89506 main.go:141] libmachine: (addons-473910)   
	I1010 17:57:48.968919   89506 main.go:141] libmachine: (addons-473910)   </cpu>
	I1010 17:57:48.968929   89506 main.go:141] libmachine: (addons-473910)   <os>
	I1010 17:57:48.968937   89506 main.go:141] libmachine: (addons-473910)     <type>hvm</type>
	I1010 17:57:48.968948   89506 main.go:141] libmachine: (addons-473910)     <boot dev='cdrom'/>
	I1010 17:57:48.968956   89506 main.go:141] libmachine: (addons-473910)     <boot dev='hd'/>
	I1010 17:57:48.968964   89506 main.go:141] libmachine: (addons-473910)     <bootmenu enable='no'/>
	I1010 17:57:48.968973   89506 main.go:141] libmachine: (addons-473910)   </os>
	I1010 17:57:48.969011   89506 main.go:141] libmachine: (addons-473910)   <devices>
	I1010 17:57:48.969033   89506 main.go:141] libmachine: (addons-473910)     <disk type='file' device='cdrom'>
	I1010 17:57:48.969043   89506 main.go:141] libmachine: (addons-473910)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/boot2docker.iso'/>
	I1010 17:57:48.969054   89506 main.go:141] libmachine: (addons-473910)       <target dev='hdc' bus='scsi'/>
	I1010 17:57:48.969061   89506 main.go:141] libmachine: (addons-473910)       <readonly/>
	I1010 17:57:48.969069   89506 main.go:141] libmachine: (addons-473910)     </disk>
	I1010 17:57:48.969076   89506 main.go:141] libmachine: (addons-473910)     <disk type='file' device='disk'>
	I1010 17:57:48.969085   89506 main.go:141] libmachine: (addons-473910)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 17:57:48.969099   89506 main.go:141] libmachine: (addons-473910)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/addons-473910.rawdisk'/>
	I1010 17:57:48.969107   89506 main.go:141] libmachine: (addons-473910)       <target dev='hda' bus='virtio'/>
	I1010 17:57:48.969112   89506 main.go:141] libmachine: (addons-473910)     </disk>
	I1010 17:57:48.969119   89506 main.go:141] libmachine: (addons-473910)     <interface type='network'>
	I1010 17:57:48.969124   89506 main.go:141] libmachine: (addons-473910)       <source network='mk-addons-473910'/>
	I1010 17:57:48.969131   89506 main.go:141] libmachine: (addons-473910)       <model type='virtio'/>
	I1010 17:57:48.969136   89506 main.go:141] libmachine: (addons-473910)     </interface>
	I1010 17:57:48.969142   89506 main.go:141] libmachine: (addons-473910)     <interface type='network'>
	I1010 17:57:48.969148   89506 main.go:141] libmachine: (addons-473910)       <source network='default'/>
	I1010 17:57:48.969154   89506 main.go:141] libmachine: (addons-473910)       <model type='virtio'/>
	I1010 17:57:48.969159   89506 main.go:141] libmachine: (addons-473910)     </interface>
	I1010 17:57:48.969164   89506 main.go:141] libmachine: (addons-473910)     <serial type='pty'>
	I1010 17:57:48.969169   89506 main.go:141] libmachine: (addons-473910)       <target port='0'/>
	I1010 17:57:48.969175   89506 main.go:141] libmachine: (addons-473910)     </serial>
	I1010 17:57:48.969180   89506 main.go:141] libmachine: (addons-473910)     <console type='pty'>
	I1010 17:57:48.969190   89506 main.go:141] libmachine: (addons-473910)       <target type='serial' port='0'/>
	I1010 17:57:48.969198   89506 main.go:141] libmachine: (addons-473910)     </console>
	I1010 17:57:48.969202   89506 main.go:141] libmachine: (addons-473910)     <rng model='virtio'>
	I1010 17:57:48.969214   89506 main.go:141] libmachine: (addons-473910)       <backend model='random'>/dev/random</backend>
	I1010 17:57:48.969222   89506 main.go:141] libmachine: (addons-473910)     </rng>
	I1010 17:57:48.969229   89506 main.go:141] libmachine: (addons-473910)     
	I1010 17:57:48.969233   89506 main.go:141] libmachine: (addons-473910)     
	I1010 17:57:48.969238   89506 main.go:141] libmachine: (addons-473910)   </devices>
	I1010 17:57:48.969244   89506 main.go:141] libmachine: (addons-473910) </domain>
	I1010 17:57:48.969281   89506 main.go:141] libmachine: (addons-473910) 
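For context: the XML block logged above is what the kvm2 driver hands to libvirt before booting the VM. A minimal, self-contained sketch of defining and starting a domain from such XML with the libvirt Go bindings (this assumes the libvirt.org/go/libvirt package and a qemu:///system connection; it is an illustration, not minikube's driver code) could look like:

package main

import (
	"fmt"
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// e.g. the <domain type='kvm'>...</domain> document printed in the log above
	domainXML := `<domain type='kvm'>...</domain>`

	// Define the persistent domain, then boot it (mirrors "define libvirt domain
	// using xml" followed by "Creating domain..." in the log).
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	fmt.Println("domain defined and started")
}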
	I1010 17:57:48.973836   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6d:5c:17 in network default
	I1010 17:57:48.974505   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:48.974529   89506 main.go:141] libmachine: (addons-473910) Ensuring networks are active...
	I1010 17:57:48.975241   89506 main.go:141] libmachine: (addons-473910) Ensuring network default is active
	I1010 17:57:48.975698   89506 main.go:141] libmachine: (addons-473910) Ensuring network mk-addons-473910 is active
	I1010 17:57:48.976185   89506 main.go:141] libmachine: (addons-473910) Getting domain xml...
	I1010 17:57:48.976915   89506 main.go:141] libmachine: (addons-473910) Creating domain...
	I1010 17:57:50.189530   89506 main.go:141] libmachine: (addons-473910) Waiting to get IP...
	I1010 17:57:50.190355   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:50.190802   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:50.190831   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:50.190755   89528 retry.go:31] will retry after 227.603693ms: waiting for machine to come up
	I1010 17:57:50.420362   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:50.420824   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:50.420864   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:50.420768   89528 retry.go:31] will retry after 387.707808ms: waiting for machine to come up
	I1010 17:57:50.811303   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:50.811780   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:50.811810   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:50.811717   89528 retry.go:31] will retry after 461.409061ms: waiting for machine to come up
	I1010 17:57:51.274344   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:51.274796   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:51.274820   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:51.274757   89528 retry.go:31] will retry after 450.992562ms: waiting for machine to come up
	I1010 17:57:51.727071   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:51.727456   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:51.727491   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:51.727386   89528 retry.go:31] will retry after 742.174885ms: waiting for machine to come up
	I1010 17:57:52.471303   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:52.471624   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:52.471651   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:52.471583   89528 retry.go:31] will retry after 814.191957ms: waiting for machine to come up
	I1010 17:57:53.287336   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:53.287807   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:53.287831   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:53.287751   89528 retry.go:31] will retry after 1.101513633s: waiting for machine to come up
	I1010 17:57:54.390576   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:54.390993   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:54.391016   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:54.390947   89528 retry.go:31] will retry after 1.215556072s: waiting for machine to come up
	I1010 17:57:55.608558   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:55.608950   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:55.608974   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:55.608921   89528 retry.go:31] will retry after 1.607661932s: waiting for machine to come up
	I1010 17:57:57.218960   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:57.219583   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:57.219608   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:57.219530   89528 retry.go:31] will retry after 1.778765799s: waiting for machine to come up
	I1010 17:57:58.999898   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:57:59.000435   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:57:59.000469   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:57:59.000377   89528 retry.go:31] will retry after 1.840094334s: waiting for machine to come up
	I1010 17:58:00.843706   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:00.844181   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:58:00.844210   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:58:00.844106   89528 retry.go:31] will retry after 2.961379135s: waiting for machine to come up
	I1010 17:58:03.806890   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:03.807309   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:58:03.807337   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:58:03.807256   89528 retry.go:31] will retry after 3.630385898s: waiting for machine to come up
	I1010 17:58:07.442208   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:07.442842   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find current IP address of domain addons-473910 in network mk-addons-473910
	I1010 17:58:07.442864   89506 main.go:141] libmachine: (addons-473910) DBG | I1010 17:58:07.442802   89528 retry.go:31] will retry after 3.657313932s: waiting for machine to come up
	I1010 17:58:11.103605   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.103959   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has current primary IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.103984   89506 main.go:141] libmachine: (addons-473910) Found IP for machine: 192.168.39.238
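For context: the repeated "will retry after ..." lines above are a backoff loop polling the DHCP leases of network mk-addons-473910 for the new domain's MAC address until an IP appears. A minimal sketch of that pattern in Go (waitForIP and lookupLeaseIP are hypothetical names used here for illustration, not minikube functions):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookupLeaseIP until it returns an address or maxWait elapses,
// growing the delay between attempts as the logged retries do.
func waitForIP(lookupLeaseIP func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for {
		if ip, err := lookupLeaseIP(); err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for machine to come up")
		}
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay *= 2 // back off before the next lease lookup
		}
	}
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 3 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.238", nil
	}, time.Minute)
	fmt.Println(ip, err)
}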
	I1010 17:58:11.103996   89506 main.go:141] libmachine: (addons-473910) Reserving static IP address...
	I1010 17:58:11.104342   89506 main.go:141] libmachine: (addons-473910) DBG | unable to find host DHCP lease matching {name: "addons-473910", mac: "52:54:00:6b:7f:56", ip: "192.168.39.238"} in network mk-addons-473910
	I1010 17:58:11.181678   89506 main.go:141] libmachine: (addons-473910) DBG | Getting to WaitForSSH function...
	I1010 17:58:11.181706   89506 main.go:141] libmachine: (addons-473910) Reserved static IP address: 192.168.39.238
	I1010 17:58:11.181774   89506 main.go:141] libmachine: (addons-473910) Waiting for SSH to be available...
	I1010 17:58:11.184533   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.185035   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:minikube Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.185075   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.185293   89506 main.go:141] libmachine: (addons-473910) DBG | Using SSH client type: external
	I1010 17:58:11.185316   89506 main.go:141] libmachine: (addons-473910) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa (-rw-------)
	I1010 17:58:11.185337   89506 main.go:141] libmachine: (addons-473910) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 17:58:11.185349   89506 main.go:141] libmachine: (addons-473910) DBG | About to run SSH command:
	I1010 17:58:11.185357   89506 main.go:141] libmachine: (addons-473910) DBG | exit 0
	I1010 17:58:11.309009   89506 main.go:141] libmachine: (addons-473910) DBG | SSH cmd err, output: <nil>: 
	I1010 17:58:11.309275   89506 main.go:141] libmachine: (addons-473910) KVM machine creation complete!
	I1010 17:58:11.309595   89506 main.go:141] libmachine: (addons-473910) Calling .GetConfigRaw
	I1010 17:58:11.310247   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:11.310456   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:11.310664   89506 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 17:58:11.310701   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:11.311947   89506 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 17:58:11.311982   89506 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 17:58:11.311987   89506 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 17:58:11.311997   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.314265   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.314609   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.314634   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.314725   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:11.314896   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.315054   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.315245   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:11.315439   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:11.315758   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:11.315781   89506 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 17:58:11.424413   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
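For context: the WaitForSSH probe above simply runs `exit 0` over SSH until the command succeeds. A minimal standalone equivalent using golang.org/x/crypto/ssh (the key path, user and address are copied from the log; this is an illustrative client, not minikube's sshutil helper):

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", "192.168.39.238:22", cfg)
	if err != nil {
		log.Fatalf("ssh dial: %v", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// A zero exit status means SSH is available and the probe succeeds.
	if err := sess.Run("exit 0"); err != nil {
		log.Fatalf("probe failed: %v", err)
	}
	log.Println("SSH is available")
}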
	I1010 17:58:11.424435   89506 main.go:141] libmachine: Detecting the provisioner...
	I1010 17:58:11.424444   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.427172   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.427546   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.427578   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.427764   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:11.427973   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.428150   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.428302   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:11.428504   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:11.428720   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:11.428735   89506 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 17:58:11.537810   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 17:58:11.537946   89506 main.go:141] libmachine: found compatible host: buildroot
	I1010 17:58:11.537961   89506 main.go:141] libmachine: Provisioning with buildroot...
	I1010 17:58:11.537971   89506 main.go:141] libmachine: (addons-473910) Calling .GetMachineName
	I1010 17:58:11.538246   89506 buildroot.go:166] provisioning hostname "addons-473910"
	I1010 17:58:11.538274   89506 main.go:141] libmachine: (addons-473910) Calling .GetMachineName
	I1010 17:58:11.538480   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.541271   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.541705   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.541722   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.541938   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:11.542147   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.542311   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.542454   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:11.542633   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:11.542798   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:11.542809   89506 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-473910 && echo "addons-473910" | sudo tee /etc/hostname
	I1010 17:58:11.662958   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-473910
	
	I1010 17:58:11.662986   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.665789   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.666201   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.666232   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.666603   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:11.666771   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.666942   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.667074   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:11.667368   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:11.667599   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:11.667620   89506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-473910' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-473910/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-473910' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 17:58:11.781956   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 17:58:11.781988   89506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 17:58:11.782073   89506 buildroot.go:174] setting up certificates
	I1010 17:58:11.782091   89506 provision.go:84] configureAuth start
	I1010 17:58:11.782113   89506 main.go:141] libmachine: (addons-473910) Calling .GetMachineName
	I1010 17:58:11.782422   89506 main.go:141] libmachine: (addons-473910) Calling .GetIP
	I1010 17:58:11.785044   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.785381   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.785416   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.785523   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.787667   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.787976   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.788007   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.788191   89506 provision.go:143] copyHostCerts
	I1010 17:58:11.788278   89506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 17:58:11.788423   89506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 17:58:11.788529   89506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 17:58:11.788616   89506 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.addons-473910 san=[127.0.0.1 192.168.39.238 addons-473910 localhost minikube]
	I1010 17:58:11.974798   89506 provision.go:177] copyRemoteCerts
	I1010 17:58:11.974886   89506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 17:58:11.974920   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:11.977540   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.977864   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:11.977895   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:11.978023   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:11.978223   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:11.978370   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:11.978497   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:12.063621   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 17:58:12.088428   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 17:58:12.113481   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 17:58:12.137853   89506 provision.go:87] duration metric: took 355.745133ms to configureAuth
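For context: the configureAuth step above generates a CA-signed server certificate whose SANs include 127.0.0.1, 192.168.39.238, addons-473910, localhost and minikube, then copies it to the guest. A minimal sketch of issuing such a certificate with Go's crypto/x509 (the throwaway CA, key sizes and validity periods here are assumptions for illustration; the real flow reuses ca.pem / ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate carrying the SANs listed in the provisioning log above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-473910"}},
		DNSNames:     []string{"addons-473910", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.238")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// PEM-encode the leaf certificate (the server.pem equivalent).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}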
	I1010 17:58:12.137885   89506 buildroot.go:189] setting minikube options for container-runtime
	I1010 17:58:12.138114   89506 config.go:182] Loaded profile config "addons-473910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 17:58:12.138226   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:12.141100   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.141447   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.141473   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.141663   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:12.141847   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.142008   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.142138   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:12.142300   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:12.142474   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:12.142488   89506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 17:58:12.362220   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 17:58:12.362250   89506 main.go:141] libmachine: Checking connection to Docker...
	I1010 17:58:12.362259   89506 main.go:141] libmachine: (addons-473910) Calling .GetURL
	I1010 17:58:12.363730   89506 main.go:141] libmachine: (addons-473910) DBG | Using libvirt version 6000000
	I1010 17:58:12.366338   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.366716   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.366744   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.366912   89506 main.go:141] libmachine: Docker is up and running!
	I1010 17:58:12.366923   89506 main.go:141] libmachine: Reticulating splines...
	I1010 17:58:12.366931   89506 client.go:171] duration metric: took 24.20617252s to LocalClient.Create
	I1010 17:58:12.366956   89506 start.go:167] duration metric: took 24.206240514s to libmachine.API.Create "addons-473910"
	I1010 17:58:12.366978   89506 start.go:293] postStartSetup for "addons-473910" (driver="kvm2")
	I1010 17:58:12.366995   89506 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 17:58:12.367019   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:12.367297   89506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 17:58:12.367327   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:12.369615   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.369900   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.369927   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.370061   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:12.370274   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.370464   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:12.370623   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:12.455668   89506 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 17:58:12.460127   89506 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 17:58:12.460161   89506 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 17:58:12.460253   89506 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 17:58:12.460281   89506 start.go:296] duration metric: took 93.293264ms for postStartSetup
	I1010 17:58:12.460315   89506 main.go:141] libmachine: (addons-473910) Calling .GetConfigRaw
	I1010 17:58:12.460930   89506 main.go:141] libmachine: (addons-473910) Calling .GetIP
	I1010 17:58:12.463785   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.464123   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.464148   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.464388   89506 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/config.json ...
	I1010 17:58:12.464573   89506 start.go:128] duration metric: took 24.322393923s to createHost
	I1010 17:58:12.464598   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:12.466862   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.467243   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.467275   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.467434   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:12.467610   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.467763   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.467879   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:12.468038   89506 main.go:141] libmachine: Using SSH client type: native
	I1010 17:58:12.468196   89506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I1010 17:58:12.468205   89506 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 17:58:12.573941   89506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728583092.548842688
	
	I1010 17:58:12.573976   89506 fix.go:216] guest clock: 1728583092.548842688
	I1010 17:58:12.573985   89506 fix.go:229] Guest: 2024-10-10 17:58:12.548842688 +0000 UTC Remote: 2024-10-10 17:58:12.464587124 +0000 UTC m=+24.431579336 (delta=84.255564ms)
	I1010 17:58:12.574025   89506 fix.go:200] guest clock delta is within tolerance: 84.255564ms
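For context: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the skew is small. A minimal sketch of that check, using the timestamps from this run (the one-second tolerance is an assumption, not minikube's configured value):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns guest - host.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Date(2024, 10, 10, 17, 58, 12, 464587124, time.UTC)
	delta, err := clockDelta("1728583092.548842688", host)
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance for the sketch
	fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(float64(delta)) < float64(tolerance))
}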
	I1010 17:58:12.574031   89506 start.go:83] releasing machines lock for "addons-473910", held for 24.431922793s
	I1010 17:58:12.574058   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:12.574342   89506 main.go:141] libmachine: (addons-473910) Calling .GetIP
	I1010 17:58:12.577145   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.577517   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.577553   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.577707   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:12.578164   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:12.578331   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:12.578435   89506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 17:58:12.578502   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:12.578541   89506 ssh_runner.go:195] Run: cat /version.json
	I1010 17:58:12.578564   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:12.581152   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.581356   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.581529   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.581550   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.581673   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:12.581707   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:12.581709   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:12.581920   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.581924   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:12.582101   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:12.582151   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:12.582217   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:12.582281   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:12.582396   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:12.692202   89506 ssh_runner.go:195] Run: systemctl --version
	I1010 17:58:12.698535   89506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 17:58:12.867274   89506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 17:58:12.874143   89506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 17:58:12.874211   89506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 17:58:12.891055   89506 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 17:58:12.891082   89506 start.go:495] detecting cgroup driver to use...
	I1010 17:58:12.891149   89506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 17:58:12.907042   89506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 17:58:12.921019   89506 docker.go:217] disabling cri-docker service (if available) ...
	I1010 17:58:12.921088   89506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 17:58:12.935362   89506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 17:58:12.949688   89506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 17:58:13.064978   89506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 17:58:13.236648   89506 docker.go:233] disabling docker service ...
	I1010 17:58:13.236730   89506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 17:58:13.251723   89506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 17:58:13.266615   89506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 17:58:13.387840   89506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 17:58:13.526894   89506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 17:58:13.541511   89506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 17:58:13.561578   89506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 17:58:13.561647   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.573035   89506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 17:58:13.573124   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.584217   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.595490   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.606340   89506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 17:58:13.617278   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.628056   89506 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.646179   89506 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:58:13.657029   89506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 17:58:13.666931   89506 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 17:58:13.666991   89506 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 17:58:13.680240   89506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 17:58:13.690852   89506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:58:13.812846   89506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 17:58:13.906985   89506 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 17:58:13.907092   89506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 17:58:13.912457   89506 start.go:563] Will wait 60s for crictl version
	I1010 17:58:13.912538   89506 ssh_runner.go:195] Run: which crictl
	I1010 17:58:13.916599   89506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 17:58:13.957494   89506 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 17:58:13.957577   89506 ssh_runner.go:195] Run: crio --version
	I1010 17:58:13.987726   89506 ssh_runner.go:195] Run: crio --version
	I1010 17:58:14.018586   89506 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 17:58:14.020146   89506 main.go:141] libmachine: (addons-473910) Calling .GetIP
	I1010 17:58:14.023405   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:14.023883   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:14.023911   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:14.024173   89506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 17:58:14.028415   89506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 17:58:14.041888   89506 kubeadm.go:883] updating cluster {Name:addons-473910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 17:58:14.042027   89506 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 17:58:14.042088   89506 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 17:58:14.076600   89506 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 17:58:14.076676   89506 ssh_runner.go:195] Run: which lz4
	I1010 17:58:14.080811   89506 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 17:58:14.085225   89506 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 17:58:14.085259   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 17:58:15.478999   89506 crio.go:462] duration metric: took 1.398201803s to copy over tarball
	I1010 17:58:15.479100   89506 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 17:58:17.682395   89506 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.203257299s)
	I1010 17:58:17.682439   89506 crio.go:469] duration metric: took 2.203401621s to extract the tarball
	I1010 17:58:17.682449   89506 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 17:58:17.719543   89506 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 17:58:17.764809   89506 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 17:58:17.764842   89506 cache_images.go:84] Images are preloaded, skipping loading
	I1010 17:58:17.764864   89506 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.31.1 crio true true} ...
	I1010 17:58:17.764995   89506 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-473910 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-473910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 17:58:17.765080   89506 ssh_runner.go:195] Run: crio config
	I1010 17:58:17.817292   89506 cni.go:84] Creating CNI manager for ""
	I1010 17:58:17.817318   89506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 17:58:17.817329   89506 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 17:58:17.817353   89506 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-473910 NodeName:addons-473910 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 17:58:17.817482   89506 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-473910"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 17:58:17.817543   89506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 17:58:17.827789   89506 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 17:58:17.827853   89506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 17:58:17.838088   89506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1010 17:58:17.855875   89506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 17:58:17.873764   89506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
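
The kubeadm.yaml copied above is the rendered form of the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration block logged at kubeadm.go:187. A minimal sketch of rendering such a config from per-cluster parameters with Go's text/template (illustrative only; the struct and template fragment here are assumptions, not minikube's actual bootstrapper types):

package main

import (
	"os"
	"text/template"
)

// clusterParams is a stand-in for the handful of values that vary per
// cluster in the config above; it is not minikube's real config struct.
type clusterParams struct {
	ClusterName   string
	ControlPlane  string
	AdvertiseIP   string
	PodSubnet     string
	ServiceSubnet string
	K8sVersion    string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: {{.ControlPlane}}
kubernetesVersion: {{.K8sVersion}}
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseIP}}"]
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		ClusterName:   "mk",
		ControlPlane:  "control-plane.minikube.internal:8443",
		AdvertiseIP:   "192.168.39.238",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
		K8sVersion:    "v1.31.1",
	}
	// Render to stdout; the flow in the log writes the result to
	// /var/tmp/minikube/kubeadm.yaml.new and later runs kubeadm init --config.
	if err := template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
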
	I1010 17:58:17.892082   89506 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I1010 17:58:17.896318   89506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
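
The bash one-liner above removes any stale control-plane.minikube.internal entry from /etc/hosts and appends the VM's IP, so the kubeadm controlPlaneEndpoint resolves inside the guest. A rough Go equivalent of that rewrite, as a sketch only (assumes write access to the hosts file; not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost keeps every hosts-file line that does not already end in
// "\t<host>" and appends a single "<ip>\t<host>" entry, mirroring the
// grep -v / echo pipeline in the log.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.238", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
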
	I1010 17:58:17.910294   89506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:58:18.023518   89506 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 17:58:18.041735   89506 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910 for IP: 192.168.39.238
	I1010 17:58:18.041789   89506 certs.go:194] generating shared ca certs ...
	I1010 17:58:18.041807   89506 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.041972   89506 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 17:58:18.136832   89506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt ...
	I1010 17:58:18.136880   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt: {Name:mk26dc60dafe21a2c355d9cd6a7d904857d94548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.137082   89506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key ...
	I1010 17:58:18.137094   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key: {Name:mk5e6f6eacdcc7a936a93570e1aec51b070fcd42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.137185   89506 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 17:58:18.208301   89506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt ...
	I1010 17:58:18.208338   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt: {Name:mkaa3880d369175d1a77a4ace6e6011fb87b8637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.208521   89506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key ...
	I1010 17:58:18.208534   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key: {Name:mk50f0454a0d3d1eb4b5e1ea0f31373d75aaaa8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.208628   89506 certs.go:256] generating profile certs ...
	I1010 17:58:18.208687   89506 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.key
	I1010 17:58:18.208699   89506 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt with IP's: []
	I1010 17:58:18.253772   89506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt ...
	I1010 17:58:18.253816   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: {Name:mkf5586b4869f587f7f271b82015415265fc91e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.253993   89506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.key ...
	I1010 17:58:18.254006   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.key: {Name:mkaef9e8d93ccb9cec6c394557644727c0cd33f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.254082   89506 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key.c19a97bb
	I1010 17:58:18.254103   89506 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt.c19a97bb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238]
	I1010 17:58:18.526577   89506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt.c19a97bb ...
	I1010 17:58:18.526617   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt.c19a97bb: {Name:mk18dab76c910ff98e0593284f313379df8daf13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.526816   89506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key.c19a97bb ...
	I1010 17:58:18.526834   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key.c19a97bb: {Name:mkc7bbbfffd01b4407381c22059791d6cc3e5c24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.526935   89506 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt.c19a97bb -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt
	I1010 17:58:18.527028   89506 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key.c19a97bb -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key
	I1010 17:58:18.527097   89506 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.key
	I1010 17:58:18.527122   89506 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.crt with IP's: []
	I1010 17:58:18.614263   89506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.crt ...
	I1010 17:58:18.614298   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.crt: {Name:mk398d3b1ec4ae05deda29a9610094de716fff2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.614481   89506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.key ...
	I1010 17:58:18.614496   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.key: {Name:mkd238c47ccda9310b2990edc780446fe7ba11d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:18.614692   89506 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 17:58:18.614737   89506 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 17:58:18.614772   89506 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 17:58:18.614805   89506 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
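
The certs.go/crypto.go lines above create two self-signed CAs (minikubeCA and proxyClientCA) and then sign the per-profile client, apiserver, and aggregator certificates against them. A self-contained Go sketch of the CA step using crypto/x509 (illustrative; the common name and lifetime here are assumptions, not minikube's implementation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	// Self-signed: the template acts as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Emit ca.crt and ca.key material as PEM; the real flow writes these
	// under the profile directory and later scp's them into the VM.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}
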
	I1010 17:58:18.615432   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 17:58:18.641430   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 17:58:18.665212   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 17:58:18.688766   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 17:58:18.715021   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1010 17:58:18.758339   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 17:58:18.784450   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 17:58:18.808045   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 17:58:18.831879   89506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 17:58:18.857009   89506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 17:58:18.878860   89506 ssh_runner.go:195] Run: openssl version
	I1010 17:58:18.885621   89506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 17:58:18.900666   89506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:58:18.905811   89506 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:58:18.905875   89506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:58:18.912419   89506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 17:58:18.924812   89506 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 17:58:18.929887   89506 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 17:58:18.929947   89506 kubeadm.go:392] StartCluster: {Name:addons-473910 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-473910 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:58:18.930050   89506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:58:18.930139   89506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:58:18.970482   89506 cri.go:89] found id: ""
	I1010 17:58:18.970569   89506 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 17:58:18.981639   89506 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 17:58:18.992242   89506 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 17:58:19.003093   89506 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 17:58:19.003156   89506 kubeadm.go:157] found existing configuration files:
	
	I1010 17:58:19.003212   89506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 17:58:19.013394   89506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 17:58:19.013463   89506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 17:58:19.023807   89506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 17:58:19.033824   89506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 17:58:19.033885   89506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 17:58:19.044196   89506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 17:58:19.053925   89506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 17:58:19.054006   89506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 17:58:19.064001   89506 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 17:58:19.073698   89506 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 17:58:19.073779   89506 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 17:58:19.083855   89506 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 17:58:19.138336   89506 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 17:58:19.138439   89506 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 17:58:19.245490   89506 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 17:58:19.245649   89506 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 17:58:19.245797   89506 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 17:58:19.254129   89506 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 17:58:19.287403   89506 out.go:235]   - Generating certificates and keys ...
	I1010 17:58:19.287569   89506 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 17:58:19.287678   89506 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 17:58:19.429909   89506 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 17:58:19.596602   89506 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1010 17:58:19.829898   89506 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1010 17:58:19.968113   89506 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1010 17:58:20.082793   89506 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1010 17:58:20.083061   89506 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-473910 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I1010 17:58:20.198718   89506 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1010 17:58:20.198889   89506 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-473910 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I1010 17:58:20.414158   89506 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 17:58:20.542564   89506 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 17:58:20.679787   89506 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1010 17:58:20.679965   89506 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 17:58:20.803829   89506 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 17:58:21.160425   89506 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 17:58:21.508888   89506 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 17:58:21.652583   89506 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 17:58:21.874428   89506 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 17:58:21.875064   89506 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 17:58:21.879825   89506 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 17:58:21.882416   89506 out.go:235]   - Booting up control plane ...
	I1010 17:58:21.882537   89506 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 17:58:21.882652   89506 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 17:58:21.882750   89506 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 17:58:21.897946   89506 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 17:58:21.906834   89506 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 17:58:21.906888   89506 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 17:58:22.033176   89506 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 17:58:22.033334   89506 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 17:58:23.048551   89506 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001627938s
	I1010 17:58:23.048766   89506 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 17:58:28.048016   89506 kubeadm.go:310] [api-check] The API server is healthy after 5.017057619s
	I1010 17:58:28.066325   89506 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 17:58:28.090379   89506 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 17:58:28.134513   89506 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 17:58:28.134727   89506 kubeadm.go:310] [mark-control-plane] Marking the node addons-473910 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 17:58:28.147951   89506 kubeadm.go:310] [bootstrap-token] Using token: urf6qy.dqcbghijitdjybk3
	I1010 17:58:28.149572   89506 out.go:235]   - Configuring RBAC rules ...
	I1010 17:58:28.149689   89506 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 17:58:28.159529   89506 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 17:58:28.170709   89506 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 17:58:28.174299   89506 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 17:58:28.177513   89506 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 17:58:28.180657   89506 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 17:58:28.456897   89506 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 17:58:28.881476   89506 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 17:58:29.458261   89506 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 17:58:29.459365   89506 kubeadm.go:310] 
	I1010 17:58:29.459470   89506 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 17:58:29.459496   89506 kubeadm.go:310] 
	I1010 17:58:29.459665   89506 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 17:58:29.459678   89506 kubeadm.go:310] 
	I1010 17:58:29.459731   89506 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 17:58:29.459789   89506 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 17:58:29.459836   89506 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 17:58:29.459842   89506 kubeadm.go:310] 
	I1010 17:58:29.459884   89506 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 17:58:29.459900   89506 kubeadm.go:310] 
	I1010 17:58:29.459984   89506 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 17:58:29.459993   89506 kubeadm.go:310] 
	I1010 17:58:29.460064   89506 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 17:58:29.460177   89506 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 17:58:29.460263   89506 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 17:58:29.460277   89506 kubeadm.go:310] 
	I1010 17:58:29.460371   89506 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 17:58:29.460475   89506 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 17:58:29.460487   89506 kubeadm.go:310] 
	I1010 17:58:29.460617   89506 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token urf6qy.dqcbghijitdjybk3 \
	I1010 17:58:29.460753   89506 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 17:58:29.460789   89506 kubeadm.go:310] 	--control-plane 
	I1010 17:58:29.460796   89506 kubeadm.go:310] 
	I1010 17:58:29.460924   89506 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 17:58:29.460939   89506 kubeadm.go:310] 
	I1010 17:58:29.461054   89506 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token urf6qy.dqcbghijitdjybk3 \
	I1010 17:58:29.461205   89506 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 17:58:29.462071   89506 kubeadm.go:310] W1010 17:58:19.118276     818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 17:58:29.462451   89506 kubeadm.go:310] W1010 17:58:19.119144     818 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 17:58:29.462592   89506 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
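
The --discovery-token-ca-cert-hash in the join commands printed above is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo), which joining nodes use to verify they are bootstrapping against the intended control plane. A short Go sketch that recomputes it from the CA copied into the VM earlier in this run (path taken from the scp step above; illustrative only):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// The cluster CA was copied to /var/lib/minikube/certs/ca.crt earlier in the log.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
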
	I1010 17:58:29.462609   89506 cni.go:84] Creating CNI manager for ""
	I1010 17:58:29.462619   89506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 17:58:29.464485   89506 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 17:58:29.465978   89506 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 17:58:29.478126   89506 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
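
With kubeadm done, minikube writes a bridge CNI config (1-k8s.conflist, 496 bytes) into /etc/cni/net.d so pods on the 10.244.0.0/16 pod CIDR get addresses. The sketch below writes an illustrative minimal bridge+portmap conflist of that kind; the file name, bridge name, and exact fields are assumptions, not a byte-for-byte copy of what minikube installs:

package main

import "os"

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.244.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Hypothetical destination; the run above uses /etc/cni/net.d/1-k8s.conflist.
	if err := os.WriteFile("/etc/cni/net.d/99-example.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
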
	I1010 17:58:29.502974   89506 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 17:58:29.503047   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:29.503114   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-473910 minikube.k8s.io/updated_at=2024_10_10T17_58_29_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=addons-473910 minikube.k8s.io/primary=true
	I1010 17:58:29.628639   89506 ops.go:34] apiserver oom_adj: -16
	I1010 17:58:29.628702   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:30.129418   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:30.629386   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:31.129511   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:31.628971   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:32.129173   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:32.629131   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:33.129178   89506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:58:33.212926   89506 kubeadm.go:1113] duration metric: took 3.709946724s to wait for elevateKubeSystemPrivileges
	I1010 17:58:33.212967   89506 kubeadm.go:394] duration metric: took 14.283023824s to StartCluster
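
The repeated `kubectl get sa default` calls above are a readiness poll: minikube waits for the default ServiceAccount to appear before binding cluster-admin to kube-system (the elevateKubeSystemPrivileges step timed at roughly 3.7s here). A minimal version of that retry loop, assuming kubectl is on PATH (a sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds once the default ServiceAccount exists in the cluster.
		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
	os.Exit(1)
}
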
	I1010 17:58:33.212993   89506 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:33.213159   89506 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 17:58:33.213639   89506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:58:33.213860   89506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 17:58:33.213886   89506 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 17:58:33.213954   89506 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1010 17:58:33.214101   89506 addons.go:69] Setting yakd=true in profile "addons-473910"
	I1010 17:58:33.214119   89506 addons.go:69] Setting inspektor-gadget=true in profile "addons-473910"
	I1010 17:58:33.214137   89506 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-473910"
	I1010 17:58:33.214145   89506 addons.go:234] Setting addon inspektor-gadget=true in "addons-473910"
	I1010 17:58:33.214143   89506 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-473910"
	I1010 17:58:33.214156   89506 addons.go:69] Setting cloud-spanner=true in profile "addons-473910"
	I1010 17:58:33.214169   89506 addons.go:69] Setting volumesnapshots=true in profile "addons-473910"
	I1010 17:58:33.214161   89506 addons.go:69] Setting metrics-server=true in profile "addons-473910"
	I1010 17:58:33.214170   89506 addons.go:69] Setting registry=true in profile "addons-473910"
	I1010 17:58:33.214218   89506 addons.go:234] Setting addon registry=true in "addons-473910"
	I1010 17:58:33.214262   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214126   89506 addons.go:69] Setting default-storageclass=true in profile "addons-473910"
	I1010 17:58:33.214299   89506 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-473910"
	I1010 17:58:33.214160   89506 addons.go:69] Setting volcano=true in profile "addons-473910"
	I1010 17:58:33.214329   89506 addons.go:234] Setting addon volcano=true in "addons-473910"
	I1010 17:58:33.214363   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214125   89506 config.go:182] Loaded profile config "addons-473910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 17:58:33.214152   89506 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-473910"
	I1010 17:58:33.214164   89506 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-473910"
	I1010 17:58:33.214585   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214152   89506 addons.go:69] Setting storage-provisioner=true in profile "addons-473910"
	I1010 17:58:33.214657   89506 addons.go:234] Setting addon storage-provisioner=true in "addons-473910"
	I1010 17:58:33.214171   89506 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-473910"
	I1010 17:58:33.214684   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214711   89506 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-473910"
	I1010 17:58:33.214743   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214755   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214770   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214780   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214793   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214832   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214855   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214873   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214882   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214967   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.215003   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.215050   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214174   89506 addons.go:69] Setting ingress=true in profile "addons-473910"
	I1010 17:58:33.215096   89506 addons.go:234] Setting addon ingress=true in "addons-473910"
	I1010 17:58:33.215097   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.215139   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.215187   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.215215   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214179   89506 addons.go:234] Setting addon cloud-spanner=true in "addons-473910"
	I1010 17:58:33.215408   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214180   89506 addons.go:69] Setting gcp-auth=true in profile "addons-473910"
	I1010 17:58:33.215506   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.214180   89506 addons.go:234] Setting addon volumesnapshots=true in "addons-473910"
	I1010 17:58:33.215539   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.215545   89506 mustload.go:65] Loading cluster: addons-473910
	I1010 17:58:33.215556   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.215725   89506 config.go:182] Loaded profile config "addons-473910": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 17:58:33.215781   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.215813   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214183   89506 addons.go:69] Setting ingress-dns=true in profile "addons-473910"
	I1010 17:58:33.215896   89506 addons.go:234] Setting addon ingress-dns=true in "addons-473910"
	I1010 17:58:33.215933   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.216007   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.216059   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.214188   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214128   89506 addons.go:234] Setting addon yakd=true in "addons-473910"
	I1010 17:58:33.216304   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.214184   89506 addons.go:234] Setting addon metrics-server=true in "addons-473910"
	I1010 17:58:33.216425   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.216714   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.216735   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.216714   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.216803   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.216821   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.216835   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.217963   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.218046   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.220938   89506 out.go:177] * Verifying Kubernetes components...
	I1010 17:58:33.233526   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.233582   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.233747   89506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:58:33.239210   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36351
	I1010 17:58:33.241357   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.244055   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I1010 17:58:33.244203   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34357
	I1010 17:58:33.244235   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.244252   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.244686   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.244780   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.245200   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.245220   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.245365   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.245376   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.245654   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.245707   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.246238   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.246291   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.246497   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45101
	I1010 17:58:33.246524   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.246855   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.247618   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.247635   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.247664   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.248064   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.248517   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.248547   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.250849   89506 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-473910"
	I1010 17:58:33.250903   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.251282   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.251303   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.252123   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.252167   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.258889   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
	I1010 17:58:33.259751   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.260609   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.260637   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.261204   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.261548   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.264965   89506 addons.go:234] Setting addon default-storageclass=true in "addons-473910"
	I1010 17:58:33.265022   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.265431   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.265481   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.271189   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I1010 17:58:33.271244   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I1010 17:58:33.271672   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.272175   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.272199   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.272611   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.273233   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.273275   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.274110   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I1010 17:58:33.274375   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.274933   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.274957   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.275030   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.275302   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.275828   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.275857   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.276168   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.276194   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.277874   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42571
	I1010 17:58:33.278475   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.279099   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.279118   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.279542   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.280121   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.280174   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.280378   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41915
	I1010 17:58:33.280816   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.281350   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.281369   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.281497   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.281874   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.281927   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.282518   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.282559   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.283865   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:33.284253   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.284276   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.285057   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I1010 17:58:33.285453   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38729
	I1010 17:58:33.285481   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.285859   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.286041   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.286057   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.286379   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.286520   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.286539   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.287109   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.287153   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.287831   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.288696   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42921
	I1010 17:58:33.289172   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.289740   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.289757   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.290134   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.290656   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.290697   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.293405   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33445
	I1010 17:58:33.293468   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I1010 17:58:33.294917   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1010 17:58:33.299518   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45167
	I1010 17:58:33.300067   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.300689   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.300717   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.301003   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46175
	I1010 17:58:33.303337   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.303411   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34631
	I1010 17:58:33.303519   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I1010 17:58:33.304019   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.304039   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.304190   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.304501   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.304674   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.304686   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.304964   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.304983   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.305051   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.305453   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.307407   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33519
	I1010 17:58:33.309219   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I1010 17:58:33.309307   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I1010 17:58:33.309365   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.309364   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.309424   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.309452   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.309416   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.309666   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.309770   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.309850   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.309871   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.310279   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.310296   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.310384   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.310393   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.310431   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.310482   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.310512   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.310820   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.310878   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.311008   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.311011   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.311020   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.311029   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.311165   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.311180   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.311391   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.311451   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.311595   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.311619   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.312146   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.312149   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.312177   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.312222   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.312665   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.312690   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.312665   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.312729   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.312744   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.313064   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:33.313112   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:33.313570   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.313738   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.314917   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.315669   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.316204   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.317082   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.317297   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:33.317314   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:33.317343   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.318483   89506 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1010 17:58:33.318501   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1010 17:58:33.318888   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.318945   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.318961   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:33.318976   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:33.319315   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:33.319324   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:33.319329   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:33.319613   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:33.319630   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	W1010 17:58:33.319713   89506 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1010 17:58:33.319815   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1010 17:58:33.319840   89506 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1010 17:58:33.319863   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.319925   89506 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1010 17:58:33.319936   89506 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1010 17:58:33.319950   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.320612   89506 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1010 17:58:33.320623   89506 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 17:58:33.320671   89506 out.go:177]   - Using image docker.io/registry:2.8.3
	I1010 17:58:33.322089   89506 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1010 17:58:33.322109   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1010 17:58:33.322130   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.322206   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38351
	I1010 17:58:33.322786   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.323172   89506 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 17:58:33.323196   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 17:58:33.323215   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.323568   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.323586   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.324094   89506 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1010 17:58:33.324454   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.324954   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.325101   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.325335   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.325564   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.325622   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.325641   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.325908   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.325913   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.325932   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.326230   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.326230   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.326324   89506 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1010 17:58:33.326342   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1010 17:58:33.326363   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.326489   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.326800   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.327086   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.327305   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.327338   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.327497   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.328164   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.328188   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.328666   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.328973   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.329132   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.329265   89506 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1010 17:58:33.329383   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.329637   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.330162   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.330184   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.330463   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.330630   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.330694   89506 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 17:58:33.330713   89506 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 17:58:33.330731   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.330752   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.331353   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.332069   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.332540   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.332571   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.332934   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.333087   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.333212   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.333331   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.334850   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.335391   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.335417   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.335590   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.335659   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42281
	I1010 17:58:33.335917   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.336052   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.336114   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.336266   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.336736   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.336761   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.337134   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.337334   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.339025   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.341041   89506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1010 17:58:33.342395   89506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1010 17:58:33.344695   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I1010 17:58:33.345160   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.345634   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.345649   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.346154   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.346459   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.347020   89506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1010 17:58:33.348149   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.348695   89506 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1010 17:58:33.348715   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1010 17:58:33.348740   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.349523   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I1010 17:58:33.349930   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.350517   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.350534   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.351293   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.351377   89506 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1010 17:58:33.351658   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.352813   89506 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1010 17:58:33.352832   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1010 17:58:33.352869   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.353473   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.353709   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40039
	I1010 17:58:33.354211   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.354391   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.354748   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.354769   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.355061   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.355188   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.355204   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.355359   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.355423   89506 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1010 17:58:33.355576   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.355787   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.356124   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.356325   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I1010 17:58:33.356613   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.356736   89506 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1010 17:58:33.356751   89506 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1010 17:58:33.356770   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.357675   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.358130   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.358422   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.358442   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.358691   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.358793   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.358891   89506 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 17:58:33.358932   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.358961   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.358974   89506 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 17:58:33.358995   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.359006   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.359184   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.359417   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.359547   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.359818   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.360250   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.360749   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.360768   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.361013   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.361138   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.361243   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.361356   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.361858   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.362274   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.362293   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.362439   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.362610   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.362737   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.362836   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.364890   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40369
	I1010 17:58:33.365302   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.365756   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.365774   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.365902   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I1010 17:58:33.366081   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.366229   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.366593   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.367166   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.367189   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.367516   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.367690   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.367705   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.369251   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.369739   89506 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1010 17:58:33.371182   89506 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1010 17:58:33.372772   89506 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1010 17:58:33.372793   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1010 17:58:33.372816   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.372901   89506 out.go:177]   - Using image docker.io/busybox:stable
	I1010 17:58:33.374654   89506 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1010 17:58:33.374674   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1010 17:58:33.374697   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.376408   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38875
	I1010 17:58:33.376890   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.376915   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.376934   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.377121   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:33.377126   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.377343   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.377537   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.377704   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.377828   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:33.377846   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:33.378213   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:33.378315   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.378480   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:33.378727   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.378749   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	W1010 17:58:33.378850   89506 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1010 17:58:33.378878   89506 retry.go:31] will retry after 324.019951ms: ssh: handshake failed: EOF
	I1010 17:58:33.378932   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.379149   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.379319   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.379486   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.380027   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:33.381888   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W1010 17:58:33.383311   89506 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43324->192.168.39.238:22: read: connection reset by peer
	I1010 17:58:33.383343   89506 retry.go:31] will retry after 193.322587ms: ssh: handshake failed: read tcp 192.168.39.1:43324->192.168.39.238:22: read: connection reset by peer
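The two handshake failures logged at 17:58:33.378 and 17:58:33.383 are transient: the guest's sshd is still settling while several addon installers open SSH sessions at once, so the dial is retried after a short randomized delay instead of failing the whole addon step. The shell sketch below only illustrates the same retry-with-backoff pattern against the node from this run; minikube's real logic lives in retry.go and sshutil.go, and the user, attempt count and delays here are assumptions.

    #!/usr/bin/env bash
    # Probe SSH reachability with a growing, jittered delay between attempts.
    host=192.168.39.238          # node IP from this run
    for attempt in 1 2 3 4 5; do
      if ssh -o ConnectTimeout=5 -o BatchMode=yes docker@"$host" true; then
        echo "ssh reachable after attempt $attempt"
        exit 0
      fi
      delay=$(awk -v a="$attempt" 'BEGIN { srand(); printf "%.3f", a * 0.3 * (0.5 + rand()) }')
      echo "handshake failed; retrying in ${delay}s"
      sleep "$delay"
    done
    echo "ssh still unreachable" >&2
    exit 1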
	I1010 17:58:33.385241   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1010 17:58:33.386766   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1010 17:58:33.388117   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1010 17:58:33.389307   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1010 17:58:33.390822   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1010 17:58:33.392330   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1010 17:58:33.393720   89506 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1010 17:58:33.395092   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1010 17:58:33.395138   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1010 17:58:33.395176   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:33.398560   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.399013   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:33.399041   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:33.399273   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:33.399507   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:33.399654   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:33.399810   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:33.763625   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 17:58:33.764231   89506 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 17:58:33.764262   89506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 17:58:33.770487   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
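The bash pipeline started at 17:58:33.764262 patches CoreDNS in place: it reads the coredns ConfigMap, uses sed to insert a hosts stanza immediately before the existing forward directive (so host.minikube.internal resolves to the host-side IP 192.168.39.1) and a log directive immediately before errors, then feeds the result back through kubectl replace. Reconstructed from that sed expression, the patched Corefile fragment looks roughly like this (a sketch, not a dump of the live ConfigMap):

        log
        errors
        ...                       # other directives unchanged
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf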
	I1010 17:58:33.881674   89506 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 17:58:33.881700   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1010 17:58:33.896480   89506 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1010 17:58:33.896513   89506 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1010 17:58:33.921560   89506 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:58:33.921596   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1010 17:58:33.945283   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1010 17:58:33.945316   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1010 17:58:33.946380   89506 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1010 17:58:33.946410   89506 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1010 17:58:33.976668   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1010 17:58:33.990537   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1010 17:58:34.009872   89506 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1010 17:58:34.009903   89506 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1010 17:58:34.055701   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 17:58:34.081759   89506 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 17:58:34.081789   89506 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 17:58:34.082201   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1010 17:58:34.132576   89506 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1010 17:58:34.132610   89506 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1010 17:58:34.190285   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1010 17:58:34.190324   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1010 17:58:34.234662   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1010 17:58:34.251140   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:58:34.252868   89506 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1010 17:58:34.252887   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1010 17:58:34.253278   89506 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1010 17:58:34.253296   89506 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1010 17:58:34.290479   89506 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 17:58:34.290505   89506 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 17:58:34.324813   89506 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1010 17:58:34.324859   89506 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1010 17:58:34.380170   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1010 17:58:34.426944   89506 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1010 17:58:34.426972   89506 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1010 17:58:34.453372   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1010 17:58:34.453400   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1010 17:58:34.522711   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 17:58:34.590593   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1010 17:58:34.590630   89506 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1010 17:58:34.609365   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1010 17:58:34.609395   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1010 17:58:34.639642   89506 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1010 17:58:34.639667   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1010 17:58:34.842298   89506 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1010 17:58:34.842324   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1010 17:58:34.873055   89506 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1010 17:58:34.873083   89506 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1010 17:58:34.894285   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1010 17:58:35.116629   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1010 17:58:35.225561   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1010 17:58:35.225587   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1010 17:58:35.603920   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1010 17:58:35.603947   89506 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1010 17:58:36.046596   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1010 17:58:36.046626   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1010 17:58:36.518357   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1010 17:58:36.518385   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1010 17:58:36.862694   89506 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1010 17:58:36.862723   89506 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1010 17:58:37.179728   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1010 17:58:38.211699   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.448035829s)
	I1010 17:58:38.211748   89506 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.447453296s)
	I1010 17:58:38.211768   89506 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1010 17:58:38.211800   89506 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.447538086s)
	I1010 17:58:38.211771   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:38.211873   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:38.211899   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.441386033s)
	I1010 17:58:38.211971   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:38.211989   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:38.212309   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:38.212328   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:38.212338   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:38.212346   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:38.212476   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:38.212536   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:38.212559   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:38.212577   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:38.212466   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:38.213136   89506 node_ready.go:35] waiting up to 6m0s for node "addons-473910" to be "Ready" ...
	I1010 17:58:38.213332   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:38.213344   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:38.213370   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:38.213383   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:38.213399   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:38.213424   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:38.241250   89506 node_ready.go:49] node "addons-473910" has status "Ready":"True"
	I1010 17:58:38.241286   89506 node_ready.go:38] duration metric: took 28.127142ms for node "addons-473910" to be "Ready" ...
	I1010 17:58:38.241298   89506 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 17:58:38.293987   89506 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-b5dd8" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:38.731251   89506 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-473910" context rescaled to 1 replicas
	I1010 17:58:39.808135   89506 pod_ready.go:93] pod "coredns-7c65d6cfc9-b5dd8" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:39.808162   89506 pod_ready.go:82] duration metric: took 1.514148752s for pod "coredns-7c65d6cfc9-b5dd8" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:39.808174   89506 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace to be "Ready" ...
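After the addon manifests are submitted, node_ready.go and pod_ready.go poll the API server until the node and the system-critical pods report Ready (here the node was already Ready after 28ms and the first coredns pod after roughly 1.5s). The equivalent one-off checks from a shell would look roughly like the commands below; they are illustrative only, reusing the context, node and pod names from this run:

    kubectl --context addons-473910 wait --for=condition=Ready node/addons-473910 --timeout=6m
    kubectl --context addons-473910 -n kube-system wait --for=condition=Ready pod/coredns-7c65d6cfc9-b5dd8 --timeout=6m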
	I1010 17:58:40.369825   89506 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1010 17:58:40.369893   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:40.373954   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:40.374559   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:40.374594   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:40.374843   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:40.375220   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:40.375433   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:40.375614   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:40.769765   89506 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1010 17:58:40.920415   89506 addons.go:234] Setting addon gcp-auth=true in "addons-473910"
	I1010 17:58:40.920470   89506 host.go:66] Checking if "addons-473910" exists ...
	I1010 17:58:40.920820   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:40.920888   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:40.936420   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I1010 17:58:40.936970   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:40.937487   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:40.937508   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:40.937865   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:40.938485   89506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 17:58:40.938534   89506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 17:58:40.953991   89506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I1010 17:58:40.954516   89506 main.go:141] libmachine: () Calling .GetVersion
	I1010 17:58:40.955042   89506 main.go:141] libmachine: Using API Version  1
	I1010 17:58:40.955068   89506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 17:58:40.955412   89506 main.go:141] libmachine: () Calling .GetMachineName
	I1010 17:58:40.955685   89506 main.go:141] libmachine: (addons-473910) Calling .GetState
	I1010 17:58:40.957168   89506 main.go:141] libmachine: (addons-473910) Calling .DriverName
	I1010 17:58:40.957514   89506 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1010 17:58:40.957549   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHHostname
	I1010 17:58:40.960919   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:40.961395   89506 main.go:141] libmachine: (addons-473910) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:7f:56", ip: ""} in network mk-addons-473910: {Iface:virbr1 ExpiryTime:2024-10-10 18:58:03 +0000 UTC Type:0 Mac:52:54:00:6b:7f:56 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:addons-473910 Clientid:01:52:54:00:6b:7f:56}
	I1010 17:58:40.961428   89506 main.go:141] libmachine: (addons-473910) DBG | domain addons-473910 has defined IP address 192.168.39.238 and MAC address 52:54:00:6b:7f:56 in network mk-addons-473910
	I1010 17:58:40.961555   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHPort
	I1010 17:58:40.961813   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHKeyPath
	I1010 17:58:40.962007   89506 main.go:141] libmachine: (addons-473910) Calling .GetSSHUsername
	I1010 17:58:40.962184   89506 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/addons-473910/id_rsa Username:docker}
	I1010 17:58:41.851281   89506 pod_ready.go:103] pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:42.201577   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.224859812s)
	I1010 17:58:42.201642   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.211064893s)
	I1010 17:58:42.201658   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.201675   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.201688   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.201706   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.201715   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.145982818s)
	I1010 17:58:42.201753   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.201769   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.201827   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.119597488s)
	I1010 17:58:42.201858   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.201868   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.201912   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.967207128s)
	I1010 17:58:42.201942   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.201952   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.202079   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.950910413s)
	I1010 17:58:42.202097   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.202105   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.202182   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.821985372s)
	I1010 17:58:42.202200   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.202209   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.202295   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.679542364s)
	I1010 17:58:42.202325   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.202337   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.202434   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.308113924s)
	I1010 17:58:42.202455   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.202464   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.202613   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.085942304s)
	W1010 17:58:42.202649   89506 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1010 17:58:42.202687   89506 retry.go:31] will retry after 371.604527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
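	The failure above is an ordering race: the VolumeSnapshotClass object is applied in the same kubectl invocation as the CRD that defines it, so the apply can reach the custom resource before the CRD is established, and minikube falls back to retrying (and later to apply --force). A minimal sketch of an ordering that avoids the race, using the manifest paths from the log (the explicit wait step is an illustration, not something minikube itself runs):
	
		# install the snapshot CRDs first and wait until the API server serves the new kind
		kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		# only then create objects of that kind
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	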
	I1010 17:58:42.205079   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205081   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205091   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205103   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205113   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205120   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205232   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205244   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205253   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205259   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205262   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205277   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205286   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205300   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205310   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205317   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205334   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205339   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205351   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205358   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205360   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205366   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205368   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205375   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205381   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205417   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205425   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205434   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205434   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205441   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205447   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205459   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205465   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205472   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205478   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205526   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205546   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205552   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205559   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205565   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.205739   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.205749   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.205757   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.205300   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.205763   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.210057   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210068   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210081   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210130   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210167   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210189   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210199   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210213   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210224   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210226   89506 addons.go:475] Verifying addon metrics-server=true in "addons-473910"
	I1010 17:58:42.210243   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210272   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210275   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210281   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210283   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210289   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210289   89506 addons.go:475] Verifying addon ingress=true in "addons-473910"
	I1010 17:58:42.210295   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210326   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210335   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210341   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.210354   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210364   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210365   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210498   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210372   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.210565   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.210578   89506 addons.go:475] Verifying addon registry=true in "addons-473910"
	I1010 17:58:42.212263   89506 out.go:177] * Verifying registry addon...
	I1010 17:58:42.212273   89506 out.go:177] * Verifying ingress addon...
	I1010 17:58:42.212277   89506 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-473910 service yakd-dashboard -n yakd-dashboard
	
	I1010 17:58:42.214541   89506 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1010 17:58:42.214715   89506 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1010 17:58:42.238213   89506 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1010 17:58:42.238247   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:42.238284   89506 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1010 17:58:42.238303   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:42.260493   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.260528   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.260844   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.260878   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	W1010 17:58:42.261035   89506 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
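	The default-storageclass warning is an optimistic-concurrency conflict: the addon read the local-path StorageClass, something else updated it in the meantime, and the write was rejected because the cached resourceVersion was stale. An illustrative way to apply the same change by hand with a merge patch, which does not depend on the stale copy (class names taken from the log and from minikube's usual "standard" class; this is a sketch, not what the addon executes):
	
		# mark local-path as non-default, then promote the minikube class
		kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
		kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
	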
	I1010 17:58:42.284140   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:42.284173   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:42.284466   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:42.284490   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:42.284506   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:42.574843   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1010 17:58:42.728905   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:42.729511   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:43.145904   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.966113437s)
	I1010 17:58:43.145962   89506 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.188418776s)
	I1010 17:58:43.145985   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:43.146016   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:43.146313   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:43.146333   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:43.146342   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:43.146342   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:43.146349   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:43.146568   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:43.146581   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:43.146593   89506 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-473910"
	I1010 17:58:43.148070   89506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1010 17:58:43.148079   89506 out.go:177] * Verifying csi-hostpath-driver addon...
	I1010 17:58:43.149882   89506 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1010 17:58:43.150503   89506 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1010 17:58:43.151715   89506 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1010 17:58:43.151740   89506 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1010 17:58:43.166627   89506 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1010 17:58:43.166654   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:43.223467   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:43.223491   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:43.336864   89506 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1010 17:58:43.336891   89506 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1010 17:58:43.400142   89506 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1010 17:58:43.400170   89506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1010 17:58:43.497283   89506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1010 17:58:43.655571   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:43.718488   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:43.719035   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:44.019146   89506 pod_ready.go:103] pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:44.158722   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:44.222414   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:44.222456   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:44.669807   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:44.764206   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:44.764244   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:45.157044   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:45.223326   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:45.223984   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:45.263172   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.68826102s)
	I1010 17:58:45.263255   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:45.263273   89506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.765946133s)
	I1010 17:58:45.263324   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:45.263286   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:45.263357   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:45.263749   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:45.263765   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:45.263786   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:45.263790   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:45.263796   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:45.263800   89506 main.go:141] libmachine: Making call to close driver server
	I1010 17:58:45.263806   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:45.263808   89506 main.go:141] libmachine: (addons-473910) Calling .Close
	I1010 17:58:45.263833   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:45.263761   89506 main.go:141] libmachine: (addons-473910) DBG | Closing plugin on server side
	I1010 17:58:45.264006   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:45.264022   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:45.264133   89506 main.go:141] libmachine: Successfully made call to close driver server
	I1010 17:58:45.264147   89506 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 17:58:45.266253   89506 addons.go:475] Verifying addon gcp-auth=true in "addons-473910"
	I1010 17:58:45.268487   89506 out.go:177] * Verifying gcp-auth addon...
	I1010 17:58:45.270870   89506 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1010 17:58:45.274245   89506 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1010 17:58:45.274273   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:45.656157   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:45.756106   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:45.756813   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:45.774841   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:46.156328   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:46.223449   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:46.225192   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:46.274342   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:46.324651   89506 pod_ready.go:98] pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.238 HostIPs:[{IP:192.168.39.238}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-10-10 17:58:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-10 17:58:38 +0000 UTC,FinishedAt:2024-10-10 17:58:44 +0000 UTC,ContainerID:cri-o://f1cd54ec71477ff4945cddb307bce9f5664f3364b54c2c3dfab6a38ba7f8f896,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f1cd54ec71477ff4945cddb307bce9f5664f3364b54c2c3dfab6a38ba7f8f896 Started:0xc0025ebda0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0025d9610} {Name:kube-api-access-gvn8m MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0025d9620}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1010 17:58:46.324686   89506 pod_ready.go:82] duration metric: took 6.516505756s for pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace to be "Ready" ...
	E1010 17:58:46.324700   89506 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-m8z9n" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:45 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-10 17:58:33 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.238 HostIPs:[{IP:192.168.39.238}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-10-10 17:58:33 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-10 17:58:38 +0000 UTC,FinishedAt:2024-10-10 17:58:44 +0000 UTC,ContainerID:cri-o://f1cd54ec71477ff4945cddb307bce9f5664f3364b54c2c3dfab6a38ba7f8f896,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f1cd54ec71477ff4945cddb307bce9f5664f3364b54c2c3dfab6a38ba7f8f896 Started:0xc0025ebda0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0025d9610} {Name:kube-api-access-gvn8m MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0025d9620}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1010 17:58:46.324711   89506 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.329352   89506 pod_ready.go:93] pod "etcd-addons-473910" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:46.329375   89506 pod_ready.go:82] duration metric: took 4.656281ms for pod "etcd-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.329386   89506 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.336117   89506 pod_ready.go:93] pod "kube-apiserver-addons-473910" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:46.336138   89506 pod_ready.go:82] duration metric: took 6.746483ms for pod "kube-apiserver-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.336148   89506 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.341140   89506 pod_ready.go:93] pod "kube-controller-manager-addons-473910" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:46.341160   89506 pod_ready.go:82] duration metric: took 5.005743ms for pod "kube-controller-manager-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.341171   89506 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qx6m4" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.347142   89506 pod_ready.go:93] pod "kube-proxy-qx6m4" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:46.347164   89506 pod_ready.go:82] duration metric: took 5.987241ms for pod "kube-proxy-qx6m4" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.347175   89506 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.656395   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:46.712574   89506 pod_ready.go:93] pod "kube-scheduler-addons-473910" in "kube-system" namespace has status "Ready":"True"
	I1010 17:58:46.712600   89506 pod_ready.go:82] duration metric: took 365.416615ms for pod "kube-scheduler-addons-473910" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.712618   89506 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace to be "Ready" ...
	I1010 17:58:46.756408   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:46.757208   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:46.774438   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:47.155677   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:47.222557   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:47.222961   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:47.275095   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:47.655534   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:47.720858   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:47.721226   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:47.775381   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:48.155166   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:48.218982   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:48.219548   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:48.275282   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:48.656028   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:48.722051   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:48.722291   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:48.722695   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:49.042552   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:49.156488   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:49.220399   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:49.220692   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:49.274299   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:49.655944   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:49.721802   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:49.723015   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:49.775797   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:50.155947   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:50.219210   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:50.219840   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:50.275476   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:50.656112   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:50.719599   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:50.719930   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:50.774375   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:51.157705   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:51.219816   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:51.221032   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:51.221450   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:51.275911   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:51.655286   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:51.721918   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:51.723043   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:51.775305   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:52.155792   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:52.219737   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:52.219886   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:52.275455   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:52.655755   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:52.719149   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:52.720636   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:52.775482   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:53.155338   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:53.220963   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:53.221934   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:53.222946   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:53.274415   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:53.655730   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:53.720497   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:53.721260   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:53.774781   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:54.155248   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:54.219233   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:54.219574   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:54.275402   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:54.656141   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:54.719828   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:54.720254   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:54.774869   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:55.238174   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:55.238309   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:55.238329   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:55.239482   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:55.274657   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:55.654798   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:55.721031   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:55.721695   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:55.774769   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:56.156015   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:56.220928   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:56.221181   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:56.275608   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:56.655407   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:56.718994   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:56.719576   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:56.775062   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:57.206282   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:57.218877   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:57.219060   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:57.488636   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:57.655929   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:57.719210   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:57.719457   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:57.720305   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:57.775238   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:58.155584   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:58.220085   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:58.220650   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:58.274869   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:58.654750   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:58.720368   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:58.720400   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:58.774451   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:59.156225   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:59.219041   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:59.219365   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:59.274914   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:58:59.655258   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:58:59.720115   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:58:59.720772   89506 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"False"
	I1010 17:58:59.720817   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:58:59.774908   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:00.157458   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:00.219695   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:00.220106   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:00.220292   89506 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace has status "Ready":"True"
	I1010 17:59:00.220309   89506 pod_ready.go:82] duration metric: took 13.507684315s for pod "nvidia-device-plugin-daemonset-6cgkn" in "kube-system" namespace to be "Ready" ...
	I1010 17:59:00.220327   89506 pod_ready.go:39] duration metric: took 21.97901843s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 17:59:00.220364   89506 api_server.go:52] waiting for apiserver process to appear ...
	I1010 17:59:00.220433   89506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 17:59:00.238835   89506 api_server.go:72] duration metric: took 27.024908677s to wait for apiserver process to appear ...
	I1010 17:59:00.238865   89506 api_server.go:88] waiting for apiserver healthz status ...
	I1010 17:59:00.238889   89506 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I1010 17:59:00.245346   89506 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I1010 17:59:00.247043   89506 api_server.go:141] control plane version: v1.31.1
	I1010 17:59:00.247074   89506 api_server.go:131] duration metric: took 8.202236ms to wait for apiserver health ...
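	The healthz probe above is a plain HTTPS GET against the apiserver endpoint shown in the log; an equivalent manual check from inside the node (illustrative, reusing the kubeconfig and binary paths that appear earlier in this log) would be:
	
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl get --raw /healthz
		# or, skipping TLS verification for a quick probe:
		curl -k https://192.168.39.238:8443/healthz
	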
	I1010 17:59:00.247083   89506 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 17:59:00.257576   89506 system_pods.go:59] 17 kube-system pods found
	I1010 17:59:00.257612   89506 system_pods.go:61] "coredns-7c65d6cfc9-b5dd8" [fc517273-4630-428f-99ab-0965a9e1b483] Running
	I1010 17:59:00.257621   89506 system_pods.go:61] "csi-hostpath-attacher-0" [5b8784e3-5271-4a1e-a3fd-aa6f61bef065] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:59:00.257630   89506 system_pods.go:61] "csi-hostpath-resizer-0" [66c8883a-7176-4819-a2d9-e88a9f7e9311] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:59:00.257638   89506 system_pods.go:61] "csi-hostpathplugin-fmhgf" [b9750fdd-c60e-4cdb-ac1e-6d7ac5ec9aab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1010 17:59:00.257647   89506 system_pods.go:61] "etcd-addons-473910" [796b099e-eee7-4be3-9845-d78a9d74cbd6] Running
	I1010 17:59:00.257652   89506 system_pods.go:61] "kube-apiserver-addons-473910" [cd91ec20-324a-4b99-bd28-7d32f89d1e56] Running
	I1010 17:59:00.257656   89506 system_pods.go:61] "kube-controller-manager-addons-473910" [615b8b3b-c358-4f08-b0e5-63448f99a101] Running
	I1010 17:59:00.257661   89506 system_pods.go:61] "kube-ingress-dns-minikube" [292a5f4d-bcd5-4dd5-8530-4228a6d71ff5] Running
	I1010 17:59:00.257666   89506 system_pods.go:61] "kube-proxy-qx6m4" [5a52a8d5-4cda-449b-b74f-cbc835d4dc37] Running
	I1010 17:59:00.257669   89506 system_pods.go:61] "kube-scheduler-addons-473910" [a2234379-3bab-4bb8-be1e-da56ef4f0f89] Running
	I1010 17:59:00.257675   89506 system_pods.go:61] "metrics-server-84c5f94fbc-sr88b" [562db437-e740-4818-a2fd-dec917bd22cf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:59:00.257682   89506 system_pods.go:61] "nvidia-device-plugin-daemonset-6cgkn" [a63e13d1-dda1-4177-8dda-1a4d528ccd30] Running
	I1010 17:59:00.257688   89506 system_pods.go:61] "registry-66c9cd494c-4k74q" [604b3b36-a2fa-4e21-ab57-959fbdee9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:59:00.257696   89506 system_pods.go:61] "registry-proxy-f4hnz" [5d8faf25-5998-4727-be43-6800e479cc59] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:59:00.257704   89506 system_pods.go:61] "snapshot-controller-56fcc65765-k4k5t" [c574bf49-6f95-46f0-8719-6fabfdc878ca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:59:00.257713   89506 system_pods.go:61] "snapshot-controller-56fcc65765-pfvnl" [49fa4565-6758-44de-aac1-ab5277b25c51] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:59:00.257717   89506 system_pods.go:61] "storage-provisioner" [32649d6b-8dd1-4e8c-b16f-9fcc465018b5] Running
	I1010 17:59:00.257724   89506 system_pods.go:74] duration metric: took 10.632494ms to wait for pod list to return data ...
	I1010 17:59:00.257735   89506 default_sa.go:34] waiting for default service account to be created ...
	I1010 17:59:00.261134   89506 default_sa.go:45] found service account: "default"
	I1010 17:59:00.261168   89506 default_sa.go:55] duration metric: took 3.418904ms for default service account to be created ...
	I1010 17:59:00.261179   89506 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 17:59:00.271177   89506 system_pods.go:86] 17 kube-system pods found
	I1010 17:59:00.271214   89506 system_pods.go:89] "coredns-7c65d6cfc9-b5dd8" [fc517273-4630-428f-99ab-0965a9e1b483] Running
	I1010 17:59:00.271227   89506 system_pods.go:89] "csi-hostpath-attacher-0" [5b8784e3-5271-4a1e-a3fd-aa6f61bef065] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:59:00.271237   89506 system_pods.go:89] "csi-hostpath-resizer-0" [66c8883a-7176-4819-a2d9-e88a9f7e9311] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:59:00.271253   89506 system_pods.go:89] "csi-hostpathplugin-fmhgf" [b9750fdd-c60e-4cdb-ac1e-6d7ac5ec9aab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1010 17:59:00.271259   89506 system_pods.go:89] "etcd-addons-473910" [796b099e-eee7-4be3-9845-d78a9d74cbd6] Running
	I1010 17:59:00.271265   89506 system_pods.go:89] "kube-apiserver-addons-473910" [cd91ec20-324a-4b99-bd28-7d32f89d1e56] Running
	I1010 17:59:00.271272   89506 system_pods.go:89] "kube-controller-manager-addons-473910" [615b8b3b-c358-4f08-b0e5-63448f99a101] Running
	I1010 17:59:00.271278   89506 system_pods.go:89] "kube-ingress-dns-minikube" [292a5f4d-bcd5-4dd5-8530-4228a6d71ff5] Running
	I1010 17:59:00.271284   89506 system_pods.go:89] "kube-proxy-qx6m4" [5a52a8d5-4cda-449b-b74f-cbc835d4dc37] Running
	I1010 17:59:00.271291   89506 system_pods.go:89] "kube-scheduler-addons-473910" [a2234379-3bab-4bb8-be1e-da56ef4f0f89] Running
	I1010 17:59:00.271306   89506 system_pods.go:89] "metrics-server-84c5f94fbc-sr88b" [562db437-e740-4818-a2fd-dec917bd22cf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:59:00.271312   89506 system_pods.go:89] "nvidia-device-plugin-daemonset-6cgkn" [a63e13d1-dda1-4177-8dda-1a4d528ccd30] Running
	I1010 17:59:00.271322   89506 system_pods.go:89] "registry-66c9cd494c-4k74q" [604b3b36-a2fa-4e21-ab57-959fbdee9a2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:59:00.271332   89506 system_pods.go:89] "registry-proxy-f4hnz" [5d8faf25-5998-4727-be43-6800e479cc59] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:59:00.271346   89506 system_pods.go:89] "snapshot-controller-56fcc65765-k4k5t" [c574bf49-6f95-46f0-8719-6fabfdc878ca] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:59:00.271356   89506 system_pods.go:89] "snapshot-controller-56fcc65765-pfvnl" [49fa4565-6758-44de-aac1-ab5277b25c51] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:59:00.271365   89506 system_pods.go:89] "storage-provisioner" [32649d6b-8dd1-4e8c-b16f-9fcc465018b5] Running
	I1010 17:59:00.271376   89506 system_pods.go:126] duration metric: took 10.189933ms to wait for k8s-apps to be running ...
	I1010 17:59:00.271391   89506 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 17:59:00.271449   89506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 17:59:00.276561   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:00.289564   89506 system_svc.go:56] duration metric: took 18.165533ms WaitForService to wait for kubelet
	I1010 17:59:00.289597   89506 kubeadm.go:582] duration metric: took 27.075678479s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 17:59:00.289619   89506 node_conditions.go:102] verifying NodePressure condition ...
	I1010 17:59:00.293057   89506 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 17:59:00.293087   89506 node_conditions.go:123] node cpu capacity is 2
	I1010 17:59:00.293103   89506 node_conditions.go:105] duration metric: took 3.477847ms to run NodePressure ...
	I1010 17:59:00.293120   89506 start.go:241] waiting for startup goroutines ...
	I1010 17:59:00.654810   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:00.719265   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:00.724028   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:00.775491   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:01.156275   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:01.219464   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:01.219534   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:01.275270   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:01.655539   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:01.718453   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:01.719066   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:01.775290   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:02.155833   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:02.219091   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:02.219455   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:02.275261   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:02.656151   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:02.736597   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:02.737149   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:02.774868   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:03.156263   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:03.218670   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:03.219880   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:03.274933   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:03.657591   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:03.719186   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:03.719663   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:03.775038   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:04.155797   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:04.220281   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:04.220759   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:04.275082   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:04.655834   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:04.719491   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:04.719923   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:04.774651   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:05.154994   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:05.218493   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:05.219237   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:05.274504   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:05.655813   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:05.719409   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:05.720082   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:05.775240   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:06.155356   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:06.219382   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:06.220191   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:06.274558   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:06.655977   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:06.720230   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:06.720620   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:06.775774   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:07.157686   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:07.257442   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:07.257578   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:07.274190   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:07.656221   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:07.756872   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:07.756873   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:07.774365   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:08.155397   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:08.218503   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:08.219300   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:08.275138   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:08.655251   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:08.719850   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:08.720047   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:08.774604   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:09.155825   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:09.218857   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:09.219699   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:09.275424   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:09.656835   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:09.757613   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:09.757737   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:09.775171   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:10.156325   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:10.220598   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:10.220843   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:10.275070   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:10.656092   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:10.719357   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:10.720749   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:10.774431   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:11.156150   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:11.219058   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:11.219096   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:11.275571   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:11.656069   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:11.720060   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:11.720625   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:11.774172   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:12.155883   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:12.219727   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:12.219785   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:12.275084   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:12.655697   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:13.114135   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:13.114350   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:13.114573   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:13.155493   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:13.219336   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:13.220093   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:13.274716   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:13.657994   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:13.719945   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:59:13.721434   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:13.775911   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:14.159480   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:14.219048   89506 kapi.go:107] duration metric: took 32.004326756s to wait for kubernetes.io/minikube-addons=registry ...
	I1010 17:59:14.219312   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:14.274874   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:14.655368   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:14.719400   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:14.774954   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:15.156030   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:15.220089   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:15.274950   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:15.656080   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:15.719488   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:15.775122   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:16.158176   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:16.219756   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:16.274091   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:16.656013   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:16.719249   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:16.774517   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:17.155193   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:17.219167   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:17.275081   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:17.655977   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:17.756170   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:17.774487   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:18.155363   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:18.218904   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:18.274487   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:18.656249   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:18.719194   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:18.775679   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:19.154686   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:19.218429   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:19.275366   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:20.015510   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:20.100910   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:20.100911   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:20.201860   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:20.301548   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:20.301868   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:20.655408   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:20.720464   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:20.775026   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:21.155802   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:21.219030   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:21.274313   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:21.655886   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:21.719724   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:21.774470   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:22.154804   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:22.218989   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:22.275050   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:22.655289   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:22.721021   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:23.044907   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:23.155431   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:23.219446   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:23.275154   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:23.656169   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:23.720546   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:23.775236   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:24.155575   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:24.218295   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:24.274597   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:24.654822   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:24.719964   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:24.819763   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:25.154755   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:25.218952   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:25.274774   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:25.657996   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:25.720681   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:25.775260   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:26.348765   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:26.349244   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:26.349577   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:26.655599   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:26.755613   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:26.774420   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:27.155832   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:27.218719   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:27.274249   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:27.655915   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:27.756923   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:27.774408   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:28.156277   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:28.220955   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:28.275246   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:28.655980   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:28.756807   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:28.774486   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:29.157195   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:29.219609   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:29.274956   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:29.655062   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:29.756002   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:29.776714   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:30.157526   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:30.218526   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:30.274848   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:30.666142   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:30.720341   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:30.775606   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:31.156072   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:31.218774   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:31.275271   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:31.656324   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:31.756181   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:31.774629   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:32.155894   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:32.218982   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:32.276279   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:32.655701   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:32.757053   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:32.774598   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:33.154908   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:33.219830   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:33.274055   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:33.657683   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:33.758318   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:33.776227   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:34.156608   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:34.219031   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:34.274886   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:34.658300   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:35.103311   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:35.120595   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:35.202983   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:35.218813   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:35.300975   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:35.655896   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:35.719603   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:35.775452   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:36.156363   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:36.227185   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:36.274841   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:36.655672   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:36.756671   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:36.775115   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:37.156118   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:37.235167   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:37.276675   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:37.656527   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:37.720904   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:37.823041   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:38.157988   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:38.219946   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:38.274323   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:38.656612   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:38.719982   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:38.774461   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:39.155867   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:39.219121   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:39.274532   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:39.655941   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:39.720979   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:39.776022   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:40.157172   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:40.222962   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:40.275864   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:40.655612   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:40.718976   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:40.774426   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:41.159848   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:41.219037   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:41.274781   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:41.655624   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:59:41.719210   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:41.774802   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:42.156325   89506 kapi.go:107] duration metric: took 59.005818145s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1010 17:59:42.219542   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:42.275521   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:42.720416   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:42.775019   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:43.219493   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:43.274914   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:43.719889   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:43.775066   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:44.219610   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:44.274050   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:44.719760   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:44.774420   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:45.219856   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:45.274513   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:45.719146   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:45.774773   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:46.219462   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:46.276284   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:46.719879   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:46.819389   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:47.219145   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:47.274924   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:48.054223   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:48.054505   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:48.222071   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:48.276143   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:48.721598   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:48.778613   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:49.219442   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:49.276224   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:49.722069   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:49.775020   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:50.219699   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:50.274717   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:50.719202   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:50.774871   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:51.218911   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:51.274411   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:51.720228   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:51.822178   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:52.219302   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:52.274894   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:52.722520   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:52.775384   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:53.219670   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:53.274567   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:53.719683   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:53.775098   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:54.219661   89506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:59:54.275098   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:54.720691   89506 kapi.go:107] duration metric: took 1m12.506145807s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1010 17:59:54.775347   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:55.275093   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:55.775095   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:56.274445   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:57.080577   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:57.275409   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:57.775492   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:58.275211   89506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:59:58.775454   89506 kapi.go:107] duration metric: took 1m13.504575758s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1010 17:59:58.777577   89506 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-473910 cluster.
	I1010 17:59:58.779170   89506 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1010 17:59:58.780864   89506 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1010 17:59:58.782468   89506 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, inspektor-gadget, metrics-server, ingress-dns, nvidia-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1010 17:59:58.783844   89506 addons.go:510] duration metric: took 1m25.569891907s for enable addons: enabled=[storage-provisioner cloud-spanner inspektor-gadget metrics-server ingress-dns nvidia-device-plugin yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1010 17:59:58.783887   89506 start.go:246] waiting for cluster config update ...
	I1010 17:59:58.783906   89506 start.go:255] writing updated cluster config ...
	I1010 17:59:58.784242   89506 ssh_runner.go:195] Run: rm -f paused
	I1010 17:59:58.835759   89506 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 17:59:58.837921   89506 out.go:177] * Done! kubectl is now configured to use "addons-473910" cluster and "default" namespace by default
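Note: the kapi.go lines above poll the addon pods by label selector until each reports Running, and the out.go messages state that, once gcp-auth is enabled, GCP credentials are mounted into every new pod unless the pod carries a gcp-auth-skip-secret label. The commands below are only an illustrative sketch of those two behaviors, using the addons-473910 context from this run; the pod name no-gcp-pod and the "true" label value are assumptions for the example (the log only names the label key), and the test itself drives these checks through kapi.go rather than kubectl.

  # Inspect the same addon pods this run was waiting on, by the label selectors from the log
  kubectl --context addons-473910 get pods -A -l kubernetes.io/minikube-addons=gcp-auth
  kubectl --context addons-473910 get pods -A -l app.kubernetes.io/name=ingress-nginx
  kubectl --context addons-473910 get pods -A -l kubernetes.io/minikube-addons=csi-hostpath-driver

  # Hypothetical pod that opts out of credential injection via the gcp-auth-skip-secret label
  kubectl --context addons-473910 apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: no-gcp-pod
    labels:
      gcp-auth-skip-secret: "true"
  spec:
    containers:
    - name: busybox
      image: gcr.io/k8s-minikube/busybox
      command: ["sleep", "3600"]
  EOF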
	
	
	==> CRI-O <==
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.638441821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583569638412607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d649b69c-64cb-482a-8714-7dccda97bf6b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.638960682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59b38fab-d7b7-4fcf-9331-44c399d28e8d name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.639028968Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59b38fab-d7b7-4fcf-9331-44c399d28e8d name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.639357841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36136f4e792d049464d12cb3b9fde6256713705f025b767534254f2d388777c4,PodSandboxId:161834b6b8da2816e53b43145535717744f0128750195b53c76495c12c325f83,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728583379209710220,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hjz49,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c2a63b0-9bc9-4632-83c5-68ff53b86390,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac61bcb449dd008022a57025113e8ce0f569b64dd266ea48ce07233fb05c610a,PodSandboxId:e7a9e829f252f54805a998ef17bcea567c25ab25439ae6921adb0f1744739151,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728583240469506019,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d2384a6-8648-46d4-94c5-9c3ec997ecdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690ee966904d3752b6c5ddc92710db914930936530246b1664d693c89208591,PodSandboxId:1c0c6796f2cdf53539eb2350758502cf2e8a059ecc2ef6925b7599ad07bf6e3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728583202391782724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee51e877-54f4-4ca0-8
4f2-3fa775f67d92,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6348a73c59af413c47d026c6c02ed02603f8f76800efceafe9257887000383,PodSandboxId:fd615680300bfde797e70660c3cef4366d0e0d9c9f827fdc718b34c6623f669a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728583155020959661,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-sr88b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 562db437-e740-4818-a2fd-dec917bd22cf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deac66408019d59749c7de76502f032afd1679bedfd1fd5c55e029101716b9b0,PodSandboxId:1452b1a36b355be1d36ee0c3bb69f0ff0092a9cd99a2ccaf71e36d984c28d1ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728583120245559496,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32649d6b-8dd1-4e8c-b16f-9fcc465018b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b221dbce7cbc7bb27dd0ca4b197a5089677416b54742e3f3eb98801b6f4b4f1a,PodSandboxId:e506ba1f2ea95b63a31f8b44fe7cfe38a0314c47c9e68a19780cfff48a00b77f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728583117647931369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-b5dd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc517273-4630-428f-99ab-0965a9e1b483,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a58a6a528d510cfebfd4412ea54a8ecf08518f84f3f8fdd857854d977196d4a3,PodSandboxId:eaa0c1c7802d21db183fd3618cfdc9827890205fede0d3a3891830b77372914d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728583114775787325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx6m4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52a8d5-4cda-449b-b74f-cbc835d4dc37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f79440a61b5920283b88621a6580dd1741cd751a10ace64cc621653297aede51,PodSandboxId:777a98389002e1503fe2d4f0869e14ff24d933bee6b6395063c6987f3e685cbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728583103283194523,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27f453211eff4a3155dfbdf6354ecec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76288df1302c87280618ccfd56c890edf9404223642b2c7063f14fe9814b69e4,PodSandboxId:9913494a438b435077fd647dd0bd9a6d6542fbfba7d4dc003975ba06766ff53e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNI
NG,CreatedAt:1728583103261371138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3020ffd8df3abe5b746c768a102d7868,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18ceead39c176ae40504127165ef6f70d34a3524c690cf597693a5cea85eb9f,PodSandboxId:8c0c63bcacfc537f877bae552d8cc5eff1e964ca3e05d5b23e5034f63a754d2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNN
ING,CreatedAt:1728583103294195727,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd938ef3bfcffd67f7fa1a9a06d155c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda4917567e30a8dffe800dded48fb64958951c261adef9310b2ade87d30bf28,PodSandboxId:acd84339bc29fa286098e565057879464c6add2bf5ea93422bcc1376d3be191f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172
8583103275484443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e53a2912e9524a040780063467147bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59b38fab-d7b7-4fcf-9331-44c399d28e8d name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.682782679Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a49eced3-e28e-495a-8f4b-0a479464ab5f name=/runtime.v1.RuntimeService/Version
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.682861314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a49eced3-e28e-495a-8f4b-0a479464ab5f name=/runtime.v1.RuntimeService/Version
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.683967681Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1f2499d-9e5c-49b7-92fe-ce9e3336e27e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.685511540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583569685479879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1f2499d-9e5c-49b7-92fe-ce9e3336e27e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.686180711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d45fe6e-160d-49c9-919b-0df7d16b3ca6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.686253514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d45fe6e-160d-49c9-919b-0df7d16b3ca6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.686756027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36136f4e792d049464d12cb3b9fde6256713705f025b767534254f2d388777c4,PodSandboxId:161834b6b8da2816e53b43145535717744f0128750195b53c76495c12c325f83,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728583379209710220,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hjz49,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c2a63b0-9bc9-4632-83c5-68ff53b86390,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac61bcb449dd008022a57025113e8ce0f569b64dd266ea48ce07233fb05c610a,PodSandboxId:e7a9e829f252f54805a998ef17bcea567c25ab25439ae6921adb0f1744739151,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728583240469506019,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d2384a6-8648-46d4-94c5-9c3ec997ecdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690ee966904d3752b6c5ddc92710db914930936530246b1664d693c89208591,PodSandboxId:1c0c6796f2cdf53539eb2350758502cf2e8a059ecc2ef6925b7599ad07bf6e3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728583202391782724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee51e877-54f4-4ca0-8
4f2-3fa775f67d92,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6348a73c59af413c47d026c6c02ed02603f8f76800efceafe9257887000383,PodSandboxId:fd615680300bfde797e70660c3cef4366d0e0d9c9f827fdc718b34c6623f669a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728583155020959661,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-sr88b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 562db437-e740-4818-a2fd-dec917bd22cf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deac66408019d59749c7de76502f032afd1679bedfd1fd5c55e029101716b9b0,PodSandboxId:1452b1a36b355be1d36ee0c3bb69f0ff0092a9cd99a2ccaf71e36d984c28d1ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728583120245559496,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32649d6b-8dd1-4e8c-b16f-9fcc465018b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b221dbce7cbc7bb27dd0ca4b197a5089677416b54742e3f3eb98801b6f4b4f1a,PodSandboxId:e506ba1f2ea95b63a31f8b44fe7cfe38a0314c47c9e68a19780cfff48a00b77f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728583117647931369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-b5dd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc517273-4630-428f-99ab-0965a9e1b483,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a58a6a528d510cfebfd4412ea54a8ecf08518f84f3f8fdd857854d977196d4a3,PodSandboxId:eaa0c1c7802d21db183fd3618cfdc9827890205fede0d3a3891830b77372914d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728583114775787325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx6m4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52a8d5-4cda-449b-b74f-cbc835d4dc37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f79440a61b5920283b88621a6580dd1741cd751a10ace64cc621653297aede51,PodSandboxId:777a98389002e1503fe2d4f0869e14ff24d933bee6b6395063c6987f3e685cbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728583103283194523,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27f453211eff4a3155dfbdf6354ecec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76288df1302c87280618ccfd56c890edf9404223642b2c7063f14fe9814b69e4,PodSandboxId:9913494a438b435077fd647dd0bd9a6d6542fbfba7d4dc003975ba06766ff53e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNI
NG,CreatedAt:1728583103261371138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3020ffd8df3abe5b746c768a102d7868,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18ceead39c176ae40504127165ef6f70d34a3524c690cf597693a5cea85eb9f,PodSandboxId:8c0c63bcacfc537f877bae552d8cc5eff1e964ca3e05d5b23e5034f63a754d2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNN
ING,CreatedAt:1728583103294195727,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd938ef3bfcffd67f7fa1a9a06d155c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda4917567e30a8dffe800dded48fb64958951c261adef9310b2ade87d30bf28,PodSandboxId:acd84339bc29fa286098e565057879464c6add2bf5ea93422bcc1376d3be191f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172
8583103275484443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e53a2912e9524a040780063467147bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d45fe6e-160d-49c9-919b-0df7d16b3ca6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.725034940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30b30ed7-e30c-4103-b954-b58f70def87b name=/runtime.v1.RuntimeService/Version
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.725173650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30b30ed7-e30c-4103-b954-b58f70def87b name=/runtime.v1.RuntimeService/Version
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.726681393Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a595b42-132c-40d1-a078-f8b17c2713b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.728043719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583569728017786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a595b42-132c-40d1-a078-f8b17c2713b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.728746943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac4d05f0-0447-49ff-9b68-f6dde3217705 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.728903264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac4d05f0-0447-49ff-9b68-f6dde3217705 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.729205287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36136f4e792d049464d12cb3b9fde6256713705f025b767534254f2d388777c4,PodSandboxId:161834b6b8da2816e53b43145535717744f0128750195b53c76495c12c325f83,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728583379209710220,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hjz49,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c2a63b0-9bc9-4632-83c5-68ff53b86390,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac61bcb449dd008022a57025113e8ce0f569b64dd266ea48ce07233fb05c610a,PodSandboxId:e7a9e829f252f54805a998ef17bcea567c25ab25439ae6921adb0f1744739151,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728583240469506019,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d2384a6-8648-46d4-94c5-9c3ec997ecdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690ee966904d3752b6c5ddc92710db914930936530246b1664d693c89208591,PodSandboxId:1c0c6796f2cdf53539eb2350758502cf2e8a059ecc2ef6925b7599ad07bf6e3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728583202391782724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee51e877-54f4-4ca0-8
4f2-3fa775f67d92,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6348a73c59af413c47d026c6c02ed02603f8f76800efceafe9257887000383,PodSandboxId:fd615680300bfde797e70660c3cef4366d0e0d9c9f827fdc718b34c6623f669a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728583155020959661,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-sr88b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 562db437-e740-4818-a2fd-dec917bd22cf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deac66408019d59749c7de76502f032afd1679bedfd1fd5c55e029101716b9b0,PodSandboxId:1452b1a36b355be1d36ee0c3bb69f0ff0092a9cd99a2ccaf71e36d984c28d1ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728583120245559496,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32649d6b-8dd1-4e8c-b16f-9fcc465018b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b221dbce7cbc7bb27dd0ca4b197a5089677416b54742e3f3eb98801b6f4b4f1a,PodSandboxId:e506ba1f2ea95b63a31f8b44fe7cfe38a0314c47c9e68a19780cfff48a00b77f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728583117647931369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-b5dd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc517273-4630-428f-99ab-0965a9e1b483,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a58a6a528d510cfebfd4412ea54a8ecf08518f84f3f8fdd857854d977196d4a3,PodSandboxId:eaa0c1c7802d21db183fd3618cfdc9827890205fede0d3a3891830b77372914d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728583114775787325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx6m4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52a8d5-4cda-449b-b74f-cbc835d4dc37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f79440a61b5920283b88621a6580dd1741cd751a10ace64cc621653297aede51,PodSandboxId:777a98389002e1503fe2d4f0869e14ff24d933bee6b6395063c6987f3e685cbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728583103283194523,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27f453211eff4a3155dfbdf6354ecec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76288df1302c87280618ccfd56c890edf9404223642b2c7063f14fe9814b69e4,PodSandboxId:9913494a438b435077fd647dd0bd9a6d6542fbfba7d4dc003975ba06766ff53e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNI
NG,CreatedAt:1728583103261371138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3020ffd8df3abe5b746c768a102d7868,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18ceead39c176ae40504127165ef6f70d34a3524c690cf597693a5cea85eb9f,PodSandboxId:8c0c63bcacfc537f877bae552d8cc5eff1e964ca3e05d5b23e5034f63a754d2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNN
ING,CreatedAt:1728583103294195727,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd938ef3bfcffd67f7fa1a9a06d155c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda4917567e30a8dffe800dded48fb64958951c261adef9310b2ade87d30bf28,PodSandboxId:acd84339bc29fa286098e565057879464c6add2bf5ea93422bcc1376d3be191f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172
8583103275484443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e53a2912e9524a040780063467147bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac4d05f0-0447-49ff-9b68-f6dde3217705 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.766721580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ca2d40e-f96a-4e8c-ade7-6040f8c38a8b name=/runtime.v1.RuntimeService/Version
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.766821037Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ca2d40e-f96a-4e8c-ade7-6040f8c38a8b name=/runtime.v1.RuntimeService/Version
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.768515892Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4429bfa-cd6f-40b3-b21c-68af9c6a1468 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.769795044Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583569769761620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4429bfa-cd6f-40b3-b21c-68af9c6a1468 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.770563660Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53126252-e141-4e5b-9e84-31ce11a98558 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.770646495Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53126252-e141-4e5b-9e84-31ce11a98558 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:06:09 addons-473910 crio[664]: time="2024-10-10 18:06:09.770873149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36136f4e792d049464d12cb3b9fde6256713705f025b767534254f2d388777c4,PodSandboxId:161834b6b8da2816e53b43145535717744f0128750195b53c76495c12c325f83,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728583379209710220,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hjz49,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5c2a63b0-9bc9-4632-83c5-68ff53b86390,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac61bcb449dd008022a57025113e8ce0f569b64dd266ea48ce07233fb05c610a,PodSandboxId:e7a9e829f252f54805a998ef17bcea567c25ab25439ae6921adb0f1744739151,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728583240469506019,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d2384a6-8648-46d4-94c5-9c3ec997ecdc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6690ee966904d3752b6c5ddc92710db914930936530246b1664d693c89208591,PodSandboxId:1c0c6796f2cdf53539eb2350758502cf2e8a059ecc2ef6925b7599ad07bf6e3a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728583202391782724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee51e877-54f4-4ca0-8
4f2-3fa775f67d92,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6348a73c59af413c47d026c6c02ed02603f8f76800efceafe9257887000383,PodSandboxId:fd615680300bfde797e70660c3cef4366d0e0d9c9f827fdc718b34c6623f669a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728583155020959661,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-sr88b,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 562db437-e740-4818-a2fd-dec917bd22cf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deac66408019d59749c7de76502f032afd1679bedfd1fd5c55e029101716b9b0,PodSandboxId:1452b1a36b355be1d36ee0c3bb69f0ff0092a9cd99a2ccaf71e36d984c28d1ce,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728583120245559496,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32649d6b-8dd1-4e8c-b16f-9fcc465018b5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b221dbce7cbc7bb27dd0ca4b197a5089677416b54742e3f3eb98801b6f4b4f1a,PodSandboxId:e506ba1f2ea95b63a31f8b44fe7cfe38a0314c47c9e68a19780cfff48a00b77f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728583117647931369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-b5dd8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc517273-4630-428f-99ab-0965a9e1b483,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a58a6a528d510cfebfd4412ea54a8ecf08518f84f3f8fdd857854d977196d4a3,PodSandboxId:eaa0c1c7802d21db183fd3618cfdc9827890205fede0d3a3891830b77372914d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728583114775787325,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx6m4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52a8d5-4cda-449b-b74f-cbc835d4dc37,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f79440a61b5920283b88621a6580dd1741cd751a10ace64cc621653297aede51,PodSandboxId:777a98389002e1503fe2d4f0869e14ff24d933bee6b6395063c6987f3e685cbf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728583103283194523,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d27f453211eff4a3155dfbdf6354ecec,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76288df1302c87280618ccfd56c890edf9404223642b2c7063f14fe9814b69e4,PodSandboxId:9913494a438b435077fd647dd0bd9a6d6542fbfba7d4dc003975ba06766ff53e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNI
NG,CreatedAt:1728583103261371138,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3020ffd8df3abe5b746c768a102d7868,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18ceead39c176ae40504127165ef6f70d34a3524c690cf597693a5cea85eb9f,PodSandboxId:8c0c63bcacfc537f877bae552d8cc5eff1e964ca3e05d5b23e5034f63a754d2c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNN
ING,CreatedAt:1728583103294195727,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd938ef3bfcffd67f7fa1a9a06d155c1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dda4917567e30a8dffe800dded48fb64958951c261adef9310b2ade87d30bf28,PodSandboxId:acd84339bc29fa286098e565057879464c6add2bf5ea93422bcc1376d3be191f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:172
8583103275484443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-473910,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e53a2912e9524a040780063467147bc8,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53126252-e141-4e5b-9e84-31ce11a98558 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	36136f4e792d0       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   161834b6b8da2       hello-world-app-55bf9c44b4-hjz49
	ac61bcb449dd0       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         5 minutes ago       Running             nginx                     0                   e7a9e829f252f       nginx
	6690ee966904d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   1c0c6796f2cdf       busybox
	0b6348a73c59a       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   6 minutes ago       Running             metrics-server            0                   fd615680300bf       metrics-server-84c5f94fbc-sr88b
	deac66408019d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   1452b1a36b355       storage-provisioner
	b221dbce7cbc7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   e506ba1f2ea95       coredns-7c65d6cfc9-b5dd8
	a58a6a528d510       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        7 minutes ago       Running             kube-proxy                0                   eaa0c1c7802d2       kube-proxy-qx6m4
	d18ceead39c17       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        7 minutes ago       Running             kube-apiserver            0                   8c0c63bcacfc5       kube-apiserver-addons-473910
	f79440a61b592       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   777a98389002e       etcd-addons-473910
	dda4917567e30       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        7 minutes ago       Running             kube-scheduler            0                   acd84339bc29f       kube-scheduler-addons-473910
	76288df1302c8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        7 minutes ago       Running             kube-controller-manager   0                   9913494a438b4       kube-controller-manager-addons-473910
	
	
	==> coredns [b221dbce7cbc7bb27dd0ca4b197a5089677416b54742e3f3eb98801b6f4b4f1a] <==
	[INFO] 10.244.0.21:51467 - 16676 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006786s
	[INFO] 10.244.0.21:51467 - 46962 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000112844s
	[INFO] 10.244.0.21:51467 - 49370 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068382s
	[INFO] 10.244.0.21:51467 - 19660 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000102505s
	[INFO] 10.244.0.21:46380 - 17885 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000105907s
	[INFO] 10.244.0.21:46380 - 36321 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000113352s
	[INFO] 10.244.0.21:46380 - 3760 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040672s
	[INFO] 10.244.0.21:46380 - 17680 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003887s
	[INFO] 10.244.0.21:46380 - 19683 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037572s
	[INFO] 10.244.0.21:46380 - 48308 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036648s
	[INFO] 10.244.0.21:46380 - 24058 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000050723s
	[INFO] 10.244.0.21:40423 - 8494 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000161748s
	[INFO] 10.244.0.21:56755 - 38065 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000067054s
	[INFO] 10.244.0.21:40423 - 2302 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051366s
	[INFO] 10.244.0.21:40423 - 18587 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037703s
	[INFO] 10.244.0.21:56755 - 29065 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00010198s
	[INFO] 10.244.0.21:40423 - 54439 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008255s
	[INFO] 10.244.0.21:56755 - 13327 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046119s
	[INFO] 10.244.0.21:40423 - 61446 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032363s
	[INFO] 10.244.0.21:40423 - 58987 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038441s
	[INFO] 10.244.0.21:56755 - 43623 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046089s
	[INFO] 10.244.0.21:56755 - 64885 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098631s
	[INFO] 10.244.0.21:40423 - 43246 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004342s
	[INFO] 10.244.0.21:56755 - 53531 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083323s
	[INFO] 10.244.0.21:56755 - 59405 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072217s
	
	
	==> describe nodes <==
	Name:               addons-473910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-473910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=addons-473910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T17_58_29_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-473910
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 17:58:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-473910
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:06:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:03:35 +0000   Thu, 10 Oct 2024 17:58:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:03:35 +0000   Thu, 10 Oct 2024 17:58:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:03:35 +0000   Thu, 10 Oct 2024 17:58:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:03:35 +0000   Thu, 10 Oct 2024 17:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    addons-473910
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b93c70cc0c0471baa6df81ff59b007c
	  System UUID:                4b93c70c-c0c0-471b-aa6d-f81ff59b007c
	  Boot ID:                    74cb2697-df15-48c6-999e-efbc2fa7d0aa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  default                     hello-world-app-55bf9c44b4-hjz49         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 coredns-7c65d6cfc9-b5dd8                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m37s
	  kube-system                 etcd-addons-473910                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m42s
	  kube-system                 kube-apiserver-addons-473910             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-controller-manager-addons-473910    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-proxy-qx6m4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  kube-system                 kube-scheduler-addons-473910             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 metrics-server-84c5f94fbc-sr88b          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m31s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m34s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m48s (x8 over 7m48s)  kubelet          Node addons-473910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m48s (x8 over 7m48s)  kubelet          Node addons-473910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m48s (x7 over 7m48s)  kubelet          Node addons-473910 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m42s (x2 over 7m42s)  kubelet          Node addons-473910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m42s (x2 over 7m42s)  kubelet          Node addons-473910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s (x2 over 7m42s)  kubelet          Node addons-473910 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m41s                  kubelet          Node addons-473910 status is now: NodeReady
	  Normal  RegisteredNode           7m38s                  node-controller  Node addons-473910 event: Registered Node addons-473910 in Controller
	  Normal  CIDRAssignmentFailed     7m38s                  cidrAllocator    Node addons-473910 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +6.508460] kauditd_printk_skb: 88 callbacks suppressed
	[Oct10 17:59] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.344045] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.696273] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.052645] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.228022] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.303127] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.327054] kauditd_printk_skb: 25 callbacks suppressed
	[ +12.787507] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.941270] kauditd_printk_skb: 6 callbacks suppressed
	[Oct10 18:00] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.549335] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.836367] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.020223] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.999029] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.062895] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.995859] kauditd_printk_skb: 16 callbacks suppressed
	[Oct10 18:01] kauditd_printk_skb: 25 callbacks suppressed
	[ +19.827274] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.885716] kauditd_printk_skb: 7 callbacks suppressed
	[ +14.838694] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.002637] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.844652] kauditd_printk_skb: 3 callbacks suppressed
	[Oct10 18:02] kauditd_printk_skb: 2 callbacks suppressed
	[Oct10 18:03] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [f79440a61b5920283b88621a6580dd1741cd751a10ace64cc621653297aede51] <==
	{"level":"info","ts":"2024-10-10T17:59:57.065973Z","caller":"traceutil/trace.go:171","msg":"trace[927349461] linearizableReadLoop","detail":"{readStateIndex:1185; appliedIndex:1184; }","duration":"361.535534ms","start":"2024-10-10T17:59:56.704419Z","end":"2024-10-10T17:59:57.065955Z","steps":["trace[927349461] 'read index received'  (duration: 361.296293ms)","trace[927349461] 'applied index is now lower than readState.Index'  (duration: 238.723µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-10T17:59:57.066123Z","caller":"traceutil/trace.go:171","msg":"trace[892081535] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"421.415918ms","start":"2024-10-10T17:59:56.644646Z","end":"2024-10-10T17:59:57.066062Z","steps":["trace[892081535] 'process raft request'  (duration: 421.20048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T17:59:57.066206Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T17:59:56.644632Z","time spent":"421.511982ms","remote":"127.0.0.1:46698","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1143 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-10T17:59:57.066205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"303.533801ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-10T17:59:57.066241Z","caller":"traceutil/trace.go:171","msg":"trace[1787784892] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1147; }","duration":"303.577779ms","start":"2024-10-10T17:59:56.762655Z","end":"2024-10-10T17:59:57.066233Z","steps":["trace[1787784892] 'agreement among raft nodes before linearized reading'  (duration: 303.5145ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T17:59:57.066275Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T17:59:56.762615Z","time spent":"303.655046ms","remote":"127.0.0.1:46708","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-10T17:59:57.066426Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.563539ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-10T17:59:57.066442Z","caller":"traceutil/trace.go:171","msg":"trace[1757354927] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1147; }","duration":"263.579522ms","start":"2024-10-10T17:59:56.802857Z","end":"2024-10-10T17:59:57.066436Z","steps":["trace[1757354927] 'agreement among raft nodes before linearized reading'  (duration: 263.555747ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T17:59:57.066456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.171757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-10-10T17:59:57.066474Z","caller":"traceutil/trace.go:171","msg":"trace[979404040] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1147; }","duration":"252.190694ms","start":"2024-10-10T17:59:56.814279Z","end":"2024-10-10T17:59:57.066469Z","steps":["trace[979404040] 'agreement among raft nodes before linearized reading'  (duration: 252.115879ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T17:59:57.066601Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"362.197016ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-10-10T17:59:57.066617Z","caller":"traceutil/trace.go:171","msg":"trace[2140921366] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotclasses/; range_end:/registry/snapshot.storage.k8s.io/volumesnapshotclasses0; response_count:0; response_revision:1147; }","duration":"362.219092ms","start":"2024-10-10T17:59:56.704393Z","end":"2024-10-10T17:59:57.066612Z","steps":["trace[2140921366] 'agreement among raft nodes before linearized reading'  (duration: 362.18563ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T17:59:57.066640Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T17:59:56.704359Z","time spent":"362.274782ms","remote":"127.0.0.1:40238","response type":"/etcdserverpb.KV/Range","request count":0,"request size":118,"response count":1,"response size":30,"request content":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses/\" range_end:\"/registry/snapshot.storage.k8s.io/volumesnapshotclasses0\" count_only:true "}
	{"level":"info","ts":"2024-10-10T18:01:05.903713Z","caller":"traceutil/trace.go:171","msg":"trace[1475457963] linearizableReadLoop","detail":"{readStateIndex:1623; appliedIndex:1622; }","duration":"391.489148ms","start":"2024-10-10T18:01:05.512207Z","end":"2024-10-10T18:01:05.903696Z","steps":["trace[1475457963] 'read index received'  (duration: 391.346663ms)","trace[1475457963] 'applied index is now lower than readState.Index'  (duration: 142.075µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-10T18:01:05.903937Z","caller":"traceutil/trace.go:171","msg":"trace[1298268550] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"402.691115ms","start":"2024-10-10T18:01:05.501239Z","end":"2024-10-10T18:01:05.903930Z","steps":["trace[1298268550] 'process raft request'  (duration: 402.36557ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T18:01:05.904176Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T18:01:05.501209Z","time spent":"402.772231ms","remote":"127.0.0.1:46698","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1548 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-10T18:01:05.904174Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"324.936917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:1 size:636"}
	{"level":"info","ts":"2024-10-10T18:01:05.904932Z","caller":"traceutil/trace.go:171","msg":"trace[61111919] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:1562; }","duration":"325.701218ms","start":"2024-10-10T18:01:05.579220Z","end":"2024-10-10T18:01:05.904922Z","steps":["trace[61111919] 'agreement among raft nodes before linearized reading'  (duration: 324.820888ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T18:01:05.904971Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T18:01:05.579186Z","time spent":"325.775435ms","remote":"127.0.0.1:46622","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":1,"response size":659,"request content":"key:\"/registry/namespaces/local-path-storage\" "}
	{"level":"warn","ts":"2024-10-10T18:01:05.904237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"392.025689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2024-10-10T18:01:05.905124Z","caller":"traceutil/trace.go:171","msg":"trace[1421856629] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1562; }","duration":"392.914078ms","start":"2024-10-10T18:01:05.512203Z","end":"2024-10-10T18:01:05.905117Z","steps":["trace[1421856629] 'agreement among raft nodes before linearized reading'  (duration: 391.999233ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-10T18:01:05.905148Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-10T18:01:05.512169Z","time spent":"392.972751ms","remote":"127.0.0.1:46688","response type":"/etcdserverpb.KV/Range","request count":0,"request size":55,"response count":1,"response size":1005,"request content":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" "}
	{"level":"warn","ts":"2024-10-10T18:01:05.904259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.045374ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-10T18:01:05.905385Z","caller":"traceutil/trace.go:171","msg":"trace[297005619] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1562; }","duration":"102.17041ms","start":"2024-10-10T18:01:05.803205Z","end":"2024-10-10T18:01:05.905375Z","steps":["trace[297005619] 'agreement among raft nodes before linearized reading'  (duration: 101.040888ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-10T18:02:21.702911Z","caller":"traceutil/trace.go:171","msg":"trace[1826958245] transaction","detail":"{read_only:false; response_revision:1878; number_of_response:1; }","duration":"218.689166ms","start":"2024-10-10T18:02:21.483969Z","end":"2024-10-10T18:02:21.702658Z","steps":["trace[1826958245] 'process raft request'  (duration: 218.553419ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:06:10 up 8 min,  0 users,  load average: 0.02, 0.47, 0.38
	Linux addons-473910 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d18ceead39c176ae40504127165ef6f70d34a3524c690cf597693a5cea85eb9f] <==
	E1010 18:00:19.877357       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.141.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.141.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.141.215:443: connect: connection refused" logger="UnhandledError"
	E1010 18:00:19.882401       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.99.141.215:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.99.141.215:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.99.141.215:443: connect: connection refused" logger="UnhandledError"
	I1010 18:00:19.958438       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1010 18:00:37.971510       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1010 18:00:38.146175       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.147.204"}
	I1010 18:00:38.815000       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1010 18:00:39.970282       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1010 18:01:02.543718       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1010 18:01:15.873369       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1010 18:01:31.261684       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1010 18:01:31.262056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1010 18:01:31.297493       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1010 18:01:31.297604       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1010 18:01:31.315010       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1010 18:01:31.315179       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1010 18:01:31.332984       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1010 18:01:31.333042       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1010 18:01:31.512675       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1010 18:01:31.512744       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1010 18:01:31.623236       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W1010 18:01:32.315475       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1010 18:01:32.515268       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1010 18:01:32.550600       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1010 18:01:45.020949       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.41.79"}
	I1010 18:02:56.750163       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.212.100"}
	
	
	==> kube-controller-manager [76288df1302c87280618ccfd56c890edf9404223642b2c7063f14fe9814b69e4] <==
	E1010 18:03:57.297995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:04:12.052860       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:04:12.053026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:04:16.169995       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:04:16.170134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:04:28.528382       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:04:28.528482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:04:33.642399       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:04:33.642468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:04:52.118378       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:04:52.118432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:04:59.934424       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:04:59.934564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:05:05.776757       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:05:05.776823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:05:13.579494       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:05:13.579561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:05:24.649482       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:05:24.649543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:05:32.581911       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:05:32.582053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:05:42.753879       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:05:42.753940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1010 18:05:58.672977       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1010 18:05:58.673116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [a58a6a528d510cfebfd4412ea54a8ecf08518f84f3f8fdd857854d977196d4a3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 17:58:35.616671       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 17:58:35.626711       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.238"]
	E1010 17:58:35.626834       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 17:58:35.717285       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 17:58:35.717356       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 17:58:35.717382       1 server_linux.go:169] "Using iptables Proxier"
	I1010 17:58:35.753313       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 17:58:35.753613       1 server.go:483] "Version info" version="v1.31.1"
	I1010 17:58:35.753626       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 17:58:35.756156       1 config.go:199] "Starting service config controller"
	I1010 17:58:35.756185       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 17:58:35.756207       1 config.go:105] "Starting endpoint slice config controller"
	I1010 17:58:35.756211       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 17:58:35.756560       1 config.go:328] "Starting node config controller"
	I1010 17:58:35.756590       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 17:58:35.856444       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1010 17:58:35.856507       1 shared_informer.go:320] Caches are synced for service config
	I1010 17:58:35.856751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dda4917567e30a8dffe800dded48fb64958951c261adef9310b2ade87d30bf28] <==
	W1010 17:58:26.788527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1010 17:58:26.788561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:26.821495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1010 17:58:26.821600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:26.853322       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1010 17:58:26.854804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:26.859947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1010 17:58:26.860061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:26.910697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1010 17:58:26.910749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.137255       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1010 17:58:27.137388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.177491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1010 17:58:27.177543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.218491       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1010 17:58:27.218627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.227220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1010 17:58:27.227295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.234612       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1010 17:58:27.234664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.241135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 17:58:27.241212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 17:58:27.372241       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1010 17:58:27.372506       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1010 17:58:29.131164       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 10 18:04:39 addons-473910 kubelet[1205]: E1010 18:04:39.050554    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583479050151323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:04:49 addons-473910 kubelet[1205]: E1010 18:04:49.054467    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583489053506068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:04:49 addons-473910 kubelet[1205]: E1010 18:04:49.054797    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583489053506068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:04:59 addons-473910 kubelet[1205]: E1010 18:04:59.058110    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583499057580879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:04:59 addons-473910 kubelet[1205]: E1010 18:04:59.058504    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583499057580879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:04 addons-473910 kubelet[1205]: I1010 18:05:04.784666    1205 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 10 18:05:09 addons-473910 kubelet[1205]: E1010 18:05:09.061799    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583509061346461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:09 addons-473910 kubelet[1205]: E1010 18:05:09.061848    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583509061346461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:19 addons-473910 kubelet[1205]: E1010 18:05:19.066890    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583519066144981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:19 addons-473910 kubelet[1205]: E1010 18:05:19.066939    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583519066144981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:28 addons-473910 kubelet[1205]: E1010 18:05:28.819389    1205 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 18:05:28 addons-473910 kubelet[1205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 18:05:28 addons-473910 kubelet[1205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 18:05:28 addons-473910 kubelet[1205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 18:05:28 addons-473910 kubelet[1205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 18:05:29 addons-473910 kubelet[1205]: E1010 18:05:29.069455    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583529069034524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:29 addons-473910 kubelet[1205]: E1010 18:05:29.069511    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583529069034524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:39 addons-473910 kubelet[1205]: E1010 18:05:39.072886    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583539072368042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:39 addons-473910 kubelet[1205]: E1010 18:05:39.072935    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583539072368042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:49 addons-473910 kubelet[1205]: E1010 18:05:49.075855    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583549075352107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:49 addons-473910 kubelet[1205]: E1010 18:05:49.075942    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583549075352107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:59 addons-473910 kubelet[1205]: E1010 18:05:59.078381    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583559077800649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:05:59 addons-473910 kubelet[1205]: E1010 18:05:59.078661    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583559077800649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:06:09 addons-473910 kubelet[1205]: E1010 18:06:09.081006    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583569080562527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:06:09 addons-473910 kubelet[1205]: E1010 18:06:09.081039    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728583569080562527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582833,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [deac66408019d59749c7de76502f032afd1679bedfd1fd5c55e029101716b9b0] <==
	I1010 17:58:40.658372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 17:58:40.692360       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 17:58:40.692440       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 17:58:40.710950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 17:58:40.711127       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-473910_27d4675b-7e18-4c78-8a84-45db36aafbd9!
	I1010 17:58:40.713970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"335f240f-ce46-419a-9030-88aacf16bc62", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-473910_27d4675b-7e18-4c78-8a84-45db36aafbd9 became leader
	I1010 17:58:40.814288       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-473910_27d4675b-7e18-4c78-8a84-45db36aafbd9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-473910 -n addons-473910
helpers_test.go:261: (dbg) Run:  kubectl --context addons-473910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (350.41s)
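The kube-apiserver log above shows the aggregated v1beta1.metrics.k8s.io API at 10.99.141.215:443 refusing connections, which matches the MetricsServer test timing out while it waits for metrics to become available. A minimal diagnostic sketch, assuming kubectl access to the same addons-473910 context; the label selector k8s-app=metrics-server is taken from the upstream metrics-server manifests and is an assumption here, and the commands are illustrative rather than part of the test:

	# Check whether the aggregated metrics API ever became Available
	kubectl --context addons-473910 get apiservice v1beta1.metrics.k8s.io -o wide
	# Inspect the metrics-server pod and its recent logs
	kubectl --context addons-473910 -n kube-system describe pod -l k8s-app=metrics-server
	kubectl --context addons-473910 -n kube-system logs -l k8s-app=metrics-server --tail=50
	# Confirm whether node metrics are served at all
	kubectl --context addons-473910 top nodes

If the APIService reports a reason such as FailedDiscoveryCheck or MissingEndpoints, the addon pod itself (listed as metrics-server-84c5f94fbc-sr88b in the node description above) is the first place to look.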

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.32s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-473910
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-473910: exit status 82 (2m0.497763702s)

                                                
                                                
-- stdout --
	* Stopping node "addons-473910"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-473910" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-473910
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-473910: exit status 11 (21.537127532s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.238:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-473910" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-473910
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-473910: exit status 11 (6.144789135s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.238:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-473910" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-473910
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-473910: exit status 11 (6.143904252s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.238:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-473910" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.32s)
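Both halves of this failure share one root cause: the guest never actually stopped (GUEST_STOP_TIMEOUT after the two-minute wait), and afterwards the host could no longer reach it over SSH (dial tcp 192.168.39.238:22: connect: no route to host), so each subsequent addons command exits with status 11 while checking the paused state. With the kvm2 driver the guest can also be inspected and forced down outside minikube; a rough recovery sketch, assuming virsh is available on the host and that the libvirt domain name matches the profile name (both are assumptions, not shown in this log):

	# Show libvirt's view of the guest
	sudo virsh list --all | grep addons-473910
	sudo virsh dominfo addons-473910
	# Hard power-off if a graceful shutdown hangs, then retry the stop
	sudo virsh destroy addons-473910
	out/minikube-linux-amd64 stop -p addons-473910

This only recovers a wedged VM; it does not explain why the graceful shutdown stalled in the first place.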

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 node stop m02 -v=7 --alsologtostderr
E1010 18:18:30.995595   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:19:11.958145   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:19:59.530804   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-142481 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.486874969s)

                                                
                                                
-- stdout --
	* Stopping node "ha-142481-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 18:18:20.329924  103869 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:18:20.330028  103869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:18:20.330036  103869 out.go:358] Setting ErrFile to fd 2...
	I1010 18:18:20.330040  103869 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:18:20.330213  103869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:18:20.330452  103869 mustload.go:65] Loading cluster: ha-142481
	I1010 18:18:20.330870  103869 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:18:20.330889  103869 stop.go:39] StopHost: ha-142481-m02
	I1010 18:18:20.331262  103869 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:18:20.331307  103869 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:18:20.348173  103869 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46463
	I1010 18:18:20.348765  103869 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:18:20.349421  103869 main.go:141] libmachine: Using API Version  1
	I1010 18:18:20.349445  103869 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:18:20.349895  103869 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:18:20.352406  103869 out.go:177] * Stopping node "ha-142481-m02"  ...
	I1010 18:18:20.353841  103869 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1010 18:18:20.353886  103869 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:18:20.354121  103869 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1010 18:18:20.354158  103869 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:18:20.357200  103869 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:18:20.357634  103869 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:18:20.357674  103869 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:18:20.357796  103869 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:18:20.358018  103869 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:18:20.358191  103869 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:18:20.358340  103869 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:18:20.449004  103869 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1010 18:18:20.504089  103869 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1010 18:18:20.559490  103869 main.go:141] libmachine: Stopping "ha-142481-m02"...
	I1010 18:18:20.559527  103869 main.go:141] libmachine: (ha-142481-m02) Calling .GetState
	I1010 18:18:20.561413  103869 main.go:141] libmachine: (ha-142481-m02) Calling .Stop
	I1010 18:18:20.565427  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 0/120
	I1010 18:18:21.567544  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 1/120
	I1010 18:18:22.569415  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 2/120
	I1010 18:18:23.571617  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 3/120
	I1010 18:18:24.573400  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 4/120
	I1010 18:18:25.575433  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 5/120
	I1010 18:18:26.576818  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 6/120
	I1010 18:18:27.578796  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 7/120
	I1010 18:18:28.580613  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 8/120
	I1010 18:18:29.581886  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 9/120
	I1010 18:18:30.584141  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 10/120
	I1010 18:18:31.585867  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 11/120
	I1010 18:18:32.587348  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 12/120
	I1010 18:18:33.588463  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 13/120
	I1010 18:18:34.589996  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 14/120
	I1010 18:18:35.592097  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 15/120
	I1010 18:18:36.593515  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 16/120
	I1010 18:18:37.595391  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 17/120
	I1010 18:18:38.596909  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 18/120
	I1010 18:18:39.598608  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 19/120
	I1010 18:18:40.600435  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 20/120
	I1010 18:18:41.601983  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 21/120
	I1010 18:18:42.603435  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 22/120
	I1010 18:18:43.604774  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 23/120
	I1010 18:18:44.606051  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 24/120
	I1010 18:18:45.608229  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 25/120
	I1010 18:18:46.609720  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 26/120
	I1010 18:18:47.611345  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 27/120
	I1010 18:18:48.612750  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 28/120
	I1010 18:18:49.614504  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 29/120
	I1010 18:18:50.616355  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 30/120
	I1010 18:18:51.617657  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 31/120
	I1010 18:18:52.619600  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 32/120
	I1010 18:18:53.621111  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 33/120
	I1010 18:18:54.623424  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 34/120
	I1010 18:18:55.625612  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 35/120
	I1010 18:18:56.627683  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 36/120
	I1010 18:18:57.629177  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 37/120
	I1010 18:18:58.631691  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 38/120
	I1010 18:18:59.633899  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 39/120
	I1010 18:19:00.635741  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 40/120
	I1010 18:19:01.637051  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 41/120
	I1010 18:19:02.639518  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 42/120
	I1010 18:19:03.640939  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 43/120
	I1010 18:19:04.642167  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 44/120
	I1010 18:19:05.644221  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 45/120
	I1010 18:19:06.645542  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 46/120
	I1010 18:19:07.646843  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 47/120
	I1010 18:19:08.648460  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 48/120
	I1010 18:19:09.649767  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 49/120
	I1010 18:19:10.652022  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 50/120
	I1010 18:19:11.653637  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 51/120
	I1010 18:19:12.655093  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 52/120
	I1010 18:19:13.656351  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 53/120
	I1010 18:19:14.657721  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 54/120
	I1010 18:19:15.659834  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 55/120
	I1010 18:19:16.661087  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 56/120
	I1010 18:19:17.663347  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 57/120
	I1010 18:19:18.664781  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 58/120
	I1010 18:19:19.666362  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 59/120
	I1010 18:19:20.669099  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 60/120
	I1010 18:19:21.671219  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 61/120
	I1010 18:19:22.672775  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 62/120
	I1010 18:19:23.674166  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 63/120
	I1010 18:19:24.675686  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 64/120
	I1010 18:19:25.677713  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 65/120
	I1010 18:19:26.680033  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 66/120
	I1010 18:19:27.681462  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 67/120
	I1010 18:19:28.682848  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 68/120
	I1010 18:19:29.684111  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 69/120
	I1010 18:19:30.686172  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 70/120
	I1010 18:19:31.688118  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 71/120
	I1010 18:19:32.689603  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 72/120
	I1010 18:19:33.691463  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 73/120
	I1010 18:19:34.692816  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 74/120
	I1010 18:19:35.694650  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 75/120
	I1010 18:19:36.696019  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 76/120
	I1010 18:19:37.697706  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 77/120
	I1010 18:19:38.699075  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 78/120
	I1010 18:19:39.700724  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 79/120
	I1010 18:19:40.702089  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 80/120
	I1010 18:19:41.703598  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 81/120
	I1010 18:19:42.705155  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 82/120
	I1010 18:19:43.706728  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 83/120
	I1010 18:19:44.708667  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 84/120
	I1010 18:19:45.710924  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 85/120
	I1010 18:19:46.712545  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 86/120
	I1010 18:19:47.714329  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 87/120
	I1010 18:19:48.715866  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 88/120
	I1010 18:19:49.718589  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 89/120
	I1010 18:19:50.720940  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 90/120
	I1010 18:19:51.722800  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 91/120
	I1010 18:19:52.724022  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 92/120
	I1010 18:19:53.725350  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 93/120
	I1010 18:19:54.726689  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 94/120
	I1010 18:19:55.728185  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 95/120
	I1010 18:19:56.729468  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 96/120
	I1010 18:19:57.731346  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 97/120
	I1010 18:19:58.732965  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 98/120
	I1010 18:19:59.734273  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 99/120
	I1010 18:20:00.736460  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 100/120
	I1010 18:20:01.738107  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 101/120
	I1010 18:20:02.739694  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 102/120
	I1010 18:20:03.741706  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 103/120
	I1010 18:20:04.743339  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 104/120
	I1010 18:20:05.745753  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 105/120
	I1010 18:20:06.747305  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 106/120
	I1010 18:20:07.748718  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 107/120
	I1010 18:20:08.750045  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 108/120
	I1010 18:20:09.751305  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 109/120
	I1010 18:20:10.753196  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 110/120
	I1010 18:20:11.755604  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 111/120
	I1010 18:20:12.757078  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 112/120
	I1010 18:20:13.759207  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 113/120
	I1010 18:20:14.760629  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 114/120
	I1010 18:20:15.762439  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 115/120
	I1010 18:20:16.763822  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 116/120
	I1010 18:20:17.765149  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 117/120
	I1010 18:20:18.766601  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 118/120
	I1010 18:20:19.768807  103869 main.go:141] libmachine: (ha-142481-m02) Waiting for machine to stop 119/120
	I1010 18:20:20.770150  103869 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1010 18:20:20.770297  103869 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-142481 node stop m02 -v=7 --alsologtostderr": exit status 30
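The stderr above shows the shape of the failure: `node stop` issues a stop request and then polls the VM state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120"); if the domain is still Running after the last attempt it reports "unable to stop vm" and the command exits with status 30. A minimal sketch of that polling pattern (hypothetical types and helpers, not the libmachine API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// machine is a stand-in for the driver handle; State would normally query libvirt.
type machine struct{ state string }

func (m *machine) Stop()         { /* sends the shutdown request; the guest may ignore it */ }
func (m *machine) State() string { return m.state }

// waitForStop mirrors the loop in the log: poll once per second, up to
// `attempts` times, and give up if the VM never leaves the Running state.
func waitForStop(m *machine, attempts int) error {
	m.Stop()
	for i := 0; i < attempts; i++ {
		if m.State() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(&machine{state: "Running"}, 120); err != nil {
		fmt.Println("stop err:", err) // this is the path the test hit
	}
}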
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr
E1010 18:20:33.879515   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr: (18.851084574s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-142481 -n ha-142481
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-142481 logs -n 25: (1.54861046s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481:/home/docker/cp-test_ha-142481-m03_ha-142481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481 sudo cat                                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m02:/home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m04 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp testdata/cp-test.txt                                                | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481:/home/docker/cp-test_ha-142481-m04_ha-142481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481 sudo cat                                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m02:/home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03:/home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m03 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-142481 node stop m02 -v=7                                                     | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 18:13:38
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:13:38.106562   99368 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:13:38.106682   99368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:38.106690   99368 out.go:358] Setting ErrFile to fd 2...
	I1010 18:13:38.106694   99368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:38.106895   99368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:13:38.107477   99368 out.go:352] Setting JSON to false
	I1010 18:13:38.108309   99368 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6964,"bootTime":1728577054,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:13:38.108413   99368 start.go:139] virtualization: kvm guest
	I1010 18:13:38.110824   99368 out.go:177] * [ha-142481] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 18:13:38.112418   99368 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 18:13:38.112454   99368 notify.go:220] Checking for updates...
	I1010 18:13:38.114936   99368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:13:38.116370   99368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:13:38.117745   99368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.118944   99368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:13:38.120250   99368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:13:38.121551   99368 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 18:13:38.157644   99368 out.go:177] * Using the kvm2 driver based on user configuration
	I1010 18:13:38.158888   99368 start.go:297] selected driver: kvm2
	I1010 18:13:38.158919   99368 start.go:901] validating driver "kvm2" against <nil>
	I1010 18:13:38.158934   99368 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:13:38.159711   99368 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:13:38.159814   99368 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 18:13:38.174780   99368 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 18:13:38.174840   99368 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 18:13:38.175095   99368 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:13:38.175132   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:13:38.175195   99368 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1010 18:13:38.175219   99368 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 18:13:38.175271   99368 start.go:340] cluster config:
	{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:13:38.175372   99368 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:13:38.177295   99368 out.go:177] * Starting "ha-142481" primary control-plane node in "ha-142481" cluster
	I1010 18:13:38.178523   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:13:38.178564   99368 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:13:38.178578   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:13:38.178671   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:13:38.178686   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:13:38.179056   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:13:38.179080   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json: {Name:mk6ba06e5ddbd39667f8d6031429fc5b567ca233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:13:38.179240   99368 start.go:360] acquireMachinesLock for ha-142481: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:13:38.179277   99368 start.go:364] duration metric: took 20.536µs to acquireMachinesLock for "ha-142481"
	I1010 18:13:38.179299   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:13:38.179350   99368 start.go:125] createHost starting for "" (driver="kvm2")
	I1010 18:13:38.180956   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:13:38.181134   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:13:38.181190   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:13:38.195735   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I1010 18:13:38.196239   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:13:38.196810   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:13:38.196834   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:13:38.197229   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:13:38.197439   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:13:38.197656   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:13:38.197815   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:13:38.197850   99368 client.go:168] LocalClient.Create starting
	I1010 18:13:38.197896   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:13:38.197929   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:13:38.197946   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:13:38.197994   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:13:38.198011   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:13:38.198032   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:13:38.198051   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:13:38.198059   99368 main.go:141] libmachine: (ha-142481) Calling .PreCreateCheck
	I1010 18:13:38.198443   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:13:38.198814   99368 main.go:141] libmachine: Creating machine...
	I1010 18:13:38.198829   99368 main.go:141] libmachine: (ha-142481) Calling .Create
	I1010 18:13:38.199006   99368 main.go:141] libmachine: (ha-142481) Creating KVM machine...
	I1010 18:13:38.200423   99368 main.go:141] libmachine: (ha-142481) DBG | found existing default KVM network
	I1010 18:13:38.201134   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.200987   99391 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1010 18:13:38.201152   99368 main.go:141] libmachine: (ha-142481) DBG | created network xml: 
	I1010 18:13:38.201163   99368 main.go:141] libmachine: (ha-142481) DBG | <network>
	I1010 18:13:38.201168   99368 main.go:141] libmachine: (ha-142481) DBG |   <name>mk-ha-142481</name>
	I1010 18:13:38.201173   99368 main.go:141] libmachine: (ha-142481) DBG |   <dns enable='no'/>
	I1010 18:13:38.201179   99368 main.go:141] libmachine: (ha-142481) DBG |   
	I1010 18:13:38.201186   99368 main.go:141] libmachine: (ha-142481) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1010 18:13:38.201195   99368 main.go:141] libmachine: (ha-142481) DBG |     <dhcp>
	I1010 18:13:38.201204   99368 main.go:141] libmachine: (ha-142481) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1010 18:13:38.201210   99368 main.go:141] libmachine: (ha-142481) DBG |     </dhcp>
	I1010 18:13:38.201224   99368 main.go:141] libmachine: (ha-142481) DBG |   </ip>
	I1010 18:13:38.201233   99368 main.go:141] libmachine: (ha-142481) DBG |   
	I1010 18:13:38.201241   99368 main.go:141] libmachine: (ha-142481) DBG | </network>
	I1010 18:13:38.201253   99368 main.go:141] libmachine: (ha-142481) DBG | 
	I1010 18:13:38.206109   99368 main.go:141] libmachine: (ha-142481) DBG | trying to create private KVM network mk-ha-142481 192.168.39.0/24...
	I1010 18:13:38.273921   99368 main.go:141] libmachine: (ha-142481) DBG | private KVM network mk-ha-142481 192.168.39.0/24 created
	I1010 18:13:38.273973   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.273888   99391 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.273987   99368 main.go:141] libmachine: (ha-142481) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 ...
	I1010 18:13:38.274008   99368 main.go:141] libmachine: (ha-142481) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:13:38.274030   99368 main.go:141] libmachine: (ha-142481) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:13:38.538580   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.538442   99391 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa...
	I1010 18:13:38.734956   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.734800   99391 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/ha-142481.rawdisk...
	I1010 18:13:38.734986   99368 main.go:141] libmachine: (ha-142481) DBG | Writing magic tar header
	I1010 18:13:38.734996   99368 main.go:141] libmachine: (ha-142481) DBG | Writing SSH key tar header
	I1010 18:13:38.735006   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.734920   99391 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 ...
	I1010 18:13:38.735023   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481
	I1010 18:13:38.735054   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:13:38.735062   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 (perms=drwx------)
	I1010 18:13:38.735074   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:13:38.735083   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:13:38.735098   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:13:38.735107   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.735121   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:13:38.735132   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:13:38.735139   99368 main.go:141] libmachine: (ha-142481) Creating domain...
	I1010 18:13:38.735156   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:13:38.735166   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:13:38.735171   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:13:38.735177   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home
	I1010 18:13:38.735183   99368 main.go:141] libmachine: (ha-142481) DBG | Skipping /home - not owner
	I1010 18:13:38.736388   99368 main.go:141] libmachine: (ha-142481) define libvirt domain using xml: 
	I1010 18:13:38.736417   99368 main.go:141] libmachine: (ha-142481) <domain type='kvm'>
	I1010 18:13:38.736427   99368 main.go:141] libmachine: (ha-142481)   <name>ha-142481</name>
	I1010 18:13:38.736439   99368 main.go:141] libmachine: (ha-142481)   <memory unit='MiB'>2200</memory>
	I1010 18:13:38.736471   99368 main.go:141] libmachine: (ha-142481)   <vcpu>2</vcpu>
	I1010 18:13:38.736493   99368 main.go:141] libmachine: (ha-142481)   <features>
	I1010 18:13:38.736527   99368 main.go:141] libmachine: (ha-142481)     <acpi/>
	I1010 18:13:38.736554   99368 main.go:141] libmachine: (ha-142481)     <apic/>
	I1010 18:13:38.736566   99368 main.go:141] libmachine: (ha-142481)     <pae/>
	I1010 18:13:38.736588   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736600   99368 main.go:141] libmachine: (ha-142481)   </features>
	I1010 18:13:38.736610   99368 main.go:141] libmachine: (ha-142481)   <cpu mode='host-passthrough'>
	I1010 18:13:38.736620   99368 main.go:141] libmachine: (ha-142481)   
	I1010 18:13:38.736633   99368 main.go:141] libmachine: (ha-142481)   </cpu>
	I1010 18:13:38.736643   99368 main.go:141] libmachine: (ha-142481)   <os>
	I1010 18:13:38.736649   99368 main.go:141] libmachine: (ha-142481)     <type>hvm</type>
	I1010 18:13:38.736661   99368 main.go:141] libmachine: (ha-142481)     <boot dev='cdrom'/>
	I1010 18:13:38.736672   99368 main.go:141] libmachine: (ha-142481)     <boot dev='hd'/>
	I1010 18:13:38.736684   99368 main.go:141] libmachine: (ha-142481)     <bootmenu enable='no'/>
	I1010 18:13:38.736693   99368 main.go:141] libmachine: (ha-142481)   </os>
	I1010 18:13:38.736700   99368 main.go:141] libmachine: (ha-142481)   <devices>
	I1010 18:13:38.736710   99368 main.go:141] libmachine: (ha-142481)     <disk type='file' device='cdrom'>
	I1010 18:13:38.736729   99368 main.go:141] libmachine: (ha-142481)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/boot2docker.iso'/>
	I1010 18:13:38.736737   99368 main.go:141] libmachine: (ha-142481)       <target dev='hdc' bus='scsi'/>
	I1010 18:13:38.736742   99368 main.go:141] libmachine: (ha-142481)       <readonly/>
	I1010 18:13:38.736748   99368 main.go:141] libmachine: (ha-142481)     </disk>
	I1010 18:13:38.736754   99368 main.go:141] libmachine: (ha-142481)     <disk type='file' device='disk'>
	I1010 18:13:38.736761   99368 main.go:141] libmachine: (ha-142481)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:13:38.736768   99368 main.go:141] libmachine: (ha-142481)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/ha-142481.rawdisk'/>
	I1010 18:13:38.736773   99368 main.go:141] libmachine: (ha-142481)       <target dev='hda' bus='virtio'/>
	I1010 18:13:38.736780   99368 main.go:141] libmachine: (ha-142481)     </disk>
	I1010 18:13:38.736789   99368 main.go:141] libmachine: (ha-142481)     <interface type='network'>
	I1010 18:13:38.736795   99368 main.go:141] libmachine: (ha-142481)       <source network='mk-ha-142481'/>
	I1010 18:13:38.736800   99368 main.go:141] libmachine: (ha-142481)       <model type='virtio'/>
	I1010 18:13:38.736804   99368 main.go:141] libmachine: (ha-142481)     </interface>
	I1010 18:13:38.736811   99368 main.go:141] libmachine: (ha-142481)     <interface type='network'>
	I1010 18:13:38.736816   99368 main.go:141] libmachine: (ha-142481)       <source network='default'/>
	I1010 18:13:38.736822   99368 main.go:141] libmachine: (ha-142481)       <model type='virtio'/>
	I1010 18:13:38.736831   99368 main.go:141] libmachine: (ha-142481)     </interface>
	I1010 18:13:38.736837   99368 main.go:141] libmachine: (ha-142481)     <serial type='pty'>
	I1010 18:13:38.736842   99368 main.go:141] libmachine: (ha-142481)       <target port='0'/>
	I1010 18:13:38.736868   99368 main.go:141] libmachine: (ha-142481)     </serial>
	I1010 18:13:38.736882   99368 main.go:141] libmachine: (ha-142481)     <console type='pty'>
	I1010 18:13:38.736896   99368 main.go:141] libmachine: (ha-142481)       <target type='serial' port='0'/>
	I1010 18:13:38.736911   99368 main.go:141] libmachine: (ha-142481)     </console>
	I1010 18:13:38.736921   99368 main.go:141] libmachine: (ha-142481)     <rng model='virtio'>
	I1010 18:13:38.736929   99368 main.go:141] libmachine: (ha-142481)       <backend model='random'>/dev/random</backend>
	I1010 18:13:38.736935   99368 main.go:141] libmachine: (ha-142481)     </rng>
	I1010 18:13:38.736942   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736951   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736962   99368 main.go:141] libmachine: (ha-142481)   </devices>
	I1010 18:13:38.736973   99368 main.go:141] libmachine: (ha-142481) </domain>
	I1010 18:13:38.737007   99368 main.go:141] libmachine: (ha-142481) 
	I1010 18:13:38.741472   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:b1:0c:5d in network default
	I1010 18:13:38.742188   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:38.742202   99368 main.go:141] libmachine: (ha-142481) Ensuring networks are active...
	I1010 18:13:38.743102   99368 main.go:141] libmachine: (ha-142481) Ensuring network default is active
	I1010 18:13:38.743484   99368 main.go:141] libmachine: (ha-142481) Ensuring network mk-ha-142481 is active
	I1010 18:13:38.743981   99368 main.go:141] libmachine: (ha-142481) Getting domain xml...
	I1010 18:13:38.744831   99368 main.go:141] libmachine: (ha-142481) Creating domain...
	I1010 18:13:39.943643   99368 main.go:141] libmachine: (ha-142481) Waiting to get IP...
	I1010 18:13:39.944415   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:39.944819   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:39.944886   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:39.944805   99391 retry.go:31] will retry after 263.450232ms: waiting for machine to come up
	I1010 18:13:40.210494   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.210938   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.210979   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.210904   99391 retry.go:31] will retry after 318.83444ms: waiting for machine to come up
	I1010 18:13:40.531556   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.531982   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.532010   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.531946   99391 retry.go:31] will retry after 379.250744ms: waiting for machine to come up
	I1010 18:13:40.912440   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.912909   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.912942   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.912844   99391 retry.go:31] will retry after 505.831382ms: waiting for machine to come up
	I1010 18:13:41.420670   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:41.421119   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:41.421141   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:41.421071   99391 retry.go:31] will retry after 555.074801ms: waiting for machine to come up
	I1010 18:13:41.977849   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:41.978257   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:41.978281   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:41.978194   99391 retry.go:31] will retry after 636.152434ms: waiting for machine to come up
	I1010 18:13:42.615909   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:42.616285   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:42.616320   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:42.616236   99391 retry.go:31] will retry after 907.451913ms: waiting for machine to come up
	I1010 18:13:43.524700   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:43.525164   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:43.525241   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:43.525119   99391 retry.go:31] will retry after 916.746032ms: waiting for machine to come up
	I1010 18:13:44.443019   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:44.443439   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:44.443463   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:44.443379   99391 retry.go:31] will retry after 1.722399675s: waiting for machine to come up
	I1010 18:13:46.168252   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:46.168660   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:46.168691   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:46.168625   99391 retry.go:31] will retry after 2.191060126s: waiting for machine to come up
	I1010 18:13:48.361115   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:48.361666   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:48.361699   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:48.361609   99391 retry.go:31] will retry after 2.390239739s: waiting for machine to come up
	I1010 18:13:50.755200   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:50.755610   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:50.755636   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:50.755576   99391 retry.go:31] will retry after 2.188596051s: waiting for machine to come up
	I1010 18:13:52.946995   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:52.947360   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:52.947382   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:52.947318   99391 retry.go:31] will retry after 3.863064875s: waiting for machine to come up
	I1010 18:13:56.814839   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:56.815487   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:56.815508   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:56.815409   99391 retry.go:31] will retry after 3.762373701s: waiting for machine to come up
	I1010 18:14:00.580406   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.580915   99368 main.go:141] libmachine: (ha-142481) Found IP for machine: 192.168.39.104
	I1010 18:14:00.580940   99368 main.go:141] libmachine: (ha-142481) Reserving static IP address...
	I1010 18:14:00.580952   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has current primary IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.581384   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find host DHCP lease matching {name: "ha-142481", mac: "52:54:00:3e:fa:00", ip: "192.168.39.104"} in network mk-ha-142481
	I1010 18:14:00.656496   99368 main.go:141] libmachine: (ha-142481) DBG | Getting to WaitForSSH function...
	I1010 18:14:00.656530   99368 main.go:141] libmachine: (ha-142481) Reserved static IP address: 192.168.39.104
	I1010 18:14:00.656576   99368 main.go:141] libmachine: (ha-142481) Waiting for SSH to be available...
	I1010 18:14:00.659584   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.659994   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.660032   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.660120   99368 main.go:141] libmachine: (ha-142481) DBG | Using SSH client type: external
	I1010 18:14:00.660175   99368 main.go:141] libmachine: (ha-142481) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa (-rw-------)
	I1010 18:14:00.660252   99368 main.go:141] libmachine: (ha-142481) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:14:00.660280   99368 main.go:141] libmachine: (ha-142481) DBG | About to run SSH command:
	I1010 18:14:00.660297   99368 main.go:141] libmachine: (ha-142481) DBG | exit 0
	I1010 18:14:00.789008   99368 main.go:141] libmachine: (ha-142481) DBG | SSH cmd err, output: <nil>: 
	I1010 18:14:00.789292   99368 main.go:141] libmachine: (ha-142481) KVM machine creation complete!
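For context on the "will retry after ..." lines above: they come from a wait loop that re-checks the domain's DHCP lease with a growing, jittered delay until the machine reports an IP. The following is a minimal, self-contained Go sketch of that pattern; the function and variable names are illustrative only and are not minikube's actual retry.go API.

	// Illustrative backoff retry loop resembling the "will retry after" log lines.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil calls check until it succeeds or the deadline passes,
	// sleeping a randomized, growing delay between attempts.
	func retryUntil(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay, as the intervals above do
		}
	}

	func main() {
		attempts := 0
		err := retryUntil(5*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("done:", err)
	}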
	I1010 18:14:00.789591   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:14:00.790247   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:00.790563   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:00.790779   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:14:00.790797   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:00.791977   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:14:00.791993   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:14:00.792000   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:14:00.792007   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:00.795049   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.795517   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.795546   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.795737   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:00.795931   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.796109   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.796201   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:00.796384   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:00.796677   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:00.796694   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:14:00.904506   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:00.904529   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:14:00.904538   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:00.907535   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.907882   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.907924   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.908104   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:00.908324   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.908499   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.908658   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:00.908892   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:00.909076   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:00.909086   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:14:01.018108   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:14:01.018217   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:14:01.018228   99368 main.go:141] libmachine: Provisioning with buildroot...
	I1010 18:14:01.018236   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.018570   99368 buildroot.go:166] provisioning hostname "ha-142481"
	I1010 18:14:01.018602   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.018780   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.021625   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.022001   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.022049   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.022142   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.022330   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.022485   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.022628   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.022792   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:01.023020   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:01.023040   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481 && echo "ha-142481" | sudo tee /etc/hostname
	I1010 18:14:01.148746   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481
	
	I1010 18:14:01.148780   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.151700   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.152069   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.152101   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.152379   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.152566   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.152733   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.153007   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.153254   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:01.153456   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:01.153473   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:14:01.270656   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:01.270702   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:14:01.270768   99368 buildroot.go:174] setting up certificates
	I1010 18:14:01.270784   99368 provision.go:84] configureAuth start
	I1010 18:14:01.270804   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.271123   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:01.274054   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.274377   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.274414   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.274599   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.277056   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.277372   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.277402   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.277532   99368 provision.go:143] copyHostCerts
	I1010 18:14:01.277566   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:01.277608   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:14:01.277620   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:01.277701   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:14:01.277845   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:01.277882   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:14:01.277893   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:01.277935   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:14:01.278014   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:01.278037   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:14:01.278043   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:01.278078   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:14:01.278160   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481 san=[127.0.0.1 192.168.39.104 ha-142481 localhost minikube]
	I1010 18:14:01.863097   99368 provision.go:177] copyRemoteCerts
	I1010 18:14:01.863162   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:14:01.863187   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.866290   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.866626   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.866657   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.866843   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.867075   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.867295   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.867474   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:01.951802   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:14:01.951888   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:14:01.976504   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:14:01.976590   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1010 18:14:02.000608   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:14:02.000694   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 18:14:02.025514   99368 provision.go:87] duration metric: took 754.678106ms to configureAuth
	I1010 18:14:02.025558   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:14:02.025780   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:02.025872   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.028822   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.029419   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.029448   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.029637   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.029859   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.030076   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.030249   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.030408   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:02.030613   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:02.030638   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:14:02.255598   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:14:02.255635   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:14:02.255663   99368 main.go:141] libmachine: (ha-142481) Calling .GetURL
	I1010 18:14:02.256998   99368 main.go:141] libmachine: (ha-142481) DBG | Using libvirt version 6000000
	I1010 18:14:02.259693   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.260061   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.260105   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.260245   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:14:02.260269   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:14:02.260277   99368 client.go:171] duration metric: took 24.062416136s to LocalClient.Create
	I1010 18:14:02.260305   99368 start.go:167] duration metric: took 24.062491775s to libmachine.API.Create "ha-142481"
	I1010 18:14:02.260317   99368 start.go:293] postStartSetup for "ha-142481" (driver="kvm2")
	I1010 18:14:02.260330   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:14:02.260355   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.260598   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:14:02.260623   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.262655   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.262966   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.262995   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.263106   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.263281   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.263418   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.263549   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.347386   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:14:02.352007   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:14:02.352037   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:14:02.352118   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:14:02.352241   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:14:02.352255   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:14:02.352383   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:14:02.361986   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:02.387757   99368 start.go:296] duration metric: took 127.42447ms for postStartSetup
	I1010 18:14:02.387817   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:14:02.388481   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:02.391530   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.391900   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.391927   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.392187   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:02.392385   99368 start.go:128] duration metric: took 24.213024958s to createHost
	I1010 18:14:02.392410   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.394865   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.395239   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.395269   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.395418   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.395616   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.395799   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.395913   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.396045   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:02.396233   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:02.396253   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:14:02.506374   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584042.463674877
	
	I1010 18:14:02.506405   99368 fix.go:216] guest clock: 1728584042.463674877
	I1010 18:14:02.506415   99368 fix.go:229] Guest: 2024-10-10 18:14:02.463674877 +0000 UTC Remote: 2024-10-10 18:14:02.392397471 +0000 UTC m=+24.322985546 (delta=71.277406ms)
	I1010 18:14:02.506501   99368 fix.go:200] guest clock delta is within tolerance: 71.277406ms
	I1010 18:14:02.506513   99368 start.go:83] releasing machines lock for "ha-142481", held for 24.327223548s
	I1010 18:14:02.506550   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.506889   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:02.509401   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.509764   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.509802   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.509942   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510549   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510772   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510843   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:14:02.510929   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.511003   99368 ssh_runner.go:195] Run: cat /version.json
	I1010 18:14:02.511038   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.513796   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.513896   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514234   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.514254   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514280   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.514293   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514533   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.514631   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.514713   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.514804   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.514890   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.514938   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.515026   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.515073   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.615715   99368 ssh_runner.go:195] Run: systemctl --version
	I1010 18:14:02.621955   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:14:02.785775   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:14:02.792271   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:14:02.792352   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:14:02.808426   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:14:02.808464   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:14:02.808542   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:14:02.825314   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:14:02.842065   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:14:02.842135   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:14:02.858984   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:14:02.876330   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:14:02.990523   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:14:03.132316   99368 docker.go:233] disabling docker service ...
	I1010 18:14:03.132386   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:14:03.147477   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:14:03.161268   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:14:03.304325   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:14:03.429397   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:14:03.443898   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:14:03.463181   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:14:03.463273   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.474215   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:14:03.474286   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.485513   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.496394   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.507084   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:14:03.517675   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.527867   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.545825   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.556723   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:14:03.566428   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:14:03.566513   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:14:03.579726   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:14:03.589897   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:03.711306   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
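The cri-o configuration phase above is a fixed sequence of sed edits to /etc/crio/crio.conf.d/02-crio.conf followed by a daemon-reload and restart. A small, hedged Go sketch of that command sequence is shown below; the runner here simply shells out locally so the sketch is self-contained, whereas minikube drives these commands through its own ssh_runner.

	// Sketch of the cri-o configuration commands seen in the log; not minikube's code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// runner executes a shell command; in minikube this would run over SSH on the guest.
	func runner(cmd string) error {
		out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
		fmt.Printf("$ %s\n%s", cmd, out)
		return err
	}

	func configureCRIO(run func(string) error) error {
		cmds := []string{
			// point cri-o at the desired pause image
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
			// switch the cgroup manager to cgroupfs
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			// reload systemd units and restart cri-o to pick up the changes
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart crio`,
		}
		for _, c := range cmds {
			if err := run(c); err != nil {
				return fmt.Errorf("%q failed: %w", c, err)
			}
		}
		return nil
	}

	func main() {
		if err := configureCRIO(runner); err != nil {
			fmt.Println("configure cri-o:", err)
		}
	}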
	I1010 18:14:03.812353   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:14:03.812440   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:14:03.817265   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:14:03.817331   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:14:03.821238   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:14:03.865031   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:14:03.865131   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:03.893405   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:03.923688   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:14:03.925089   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:03.927862   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:03.928210   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:03.928239   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:03.928482   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:14:03.932808   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:03.947607   99368 kubeadm.go:883] updating cluster {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:14:03.947723   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:14:03.947771   99368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:14:03.980321   99368 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 18:14:03.980402   99368 ssh_runner.go:195] Run: which lz4
	I1010 18:14:03.984490   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1010 18:14:03.984586   99368 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 18:14:03.988814   99368 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 18:14:03.988866   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 18:14:05.363098   99368 crio.go:462] duration metric: took 1.37853137s to copy over tarball
	I1010 18:14:05.363172   99368 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 18:14:07.378827   99368 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.01562073s)
	I1010 18:14:07.378863   99368 crio.go:469] duration metric: took 2.015730634s to extract the tarball
	I1010 18:14:07.378873   99368 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 18:14:07.415494   99368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:14:07.461637   99368 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:14:07.461668   99368 cache_images.go:84] Images are preloaded, skipping loading
	I1010 18:14:07.461678   99368 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I1010 18:14:07.461810   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:14:07.461895   99368 ssh_runner.go:195] Run: crio config
	I1010 18:14:07.511179   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:14:07.511203   99368 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 18:14:07.511219   99368 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 18:14:07.511240   99368 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-142481 NodeName:ha-142481 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:14:07.511378   99368 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-142481"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
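The kubeadm config above is generated from the cluster profile and later written to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of how such a fragment can be rendered from profile values with text/template, here is a minimal Go sketch; the struct and field names are hypothetical and not minikube's actual template data.

	// Hedged sketch: rendering a ClusterConfiguration fragment with text/template.
	package main

	import (
		"os"
		"text/template"
	)

	const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:8443
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	type cfg struct {
		ControlPlaneEndpoint string
		KubernetesVersion    string
		PodSubnet            string
		ServiceSubnet        string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(clusterCfg))
		// Values taken from the generated config shown above.
		_ = t.Execute(os.Stdout, cfg{
			ControlPlaneEndpoint: "control-plane.minikube.internal",
			KubernetesVersion:    "v1.31.1",
			PodSubnet:            "10.244.0.0/16",
			ServiceSubnet:        "10.96.0.0/12",
		})
	}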
	
	I1010 18:14:07.511402   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:14:07.511447   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:14:07.530825   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:14:07.530966   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1010 18:14:07.531061   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:07.541336   99368 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:14:07.541418   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1010 18:14:07.551149   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1010 18:14:07.567775   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:14:07.585048   99368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1010 18:14:07.601614   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1010 18:14:07.618435   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:14:07.622366   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:07.634534   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:07.769061   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:14:07.786728   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.104
	I1010 18:14:07.786757   99368 certs.go:194] generating shared ca certs ...
	I1010 18:14:07.786780   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.786963   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:14:07.787019   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:14:07.787049   99368 certs.go:256] generating profile certs ...
	I1010 18:14:07.787126   99368 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:14:07.787145   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt with IP's: []
	I1010 18:14:07.903290   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt ...
	I1010 18:14:07.903319   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt: {Name:mkc3e45adeab2c56df47bde3919e2c30e370ae85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.903506   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key ...
	I1010 18:14:07.903521   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key: {Name:mka461c8525916f7bc85840820bc278320ec6313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.903626   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560
	I1010 18:14:07.903643   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.254]
	I1010 18:14:08.280801   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 ...
	I1010 18:14:08.280860   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560: {Name:mk5acd7350e86bebedada3fd330840a975c10cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.281063   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560 ...
	I1010 18:14:08.281078   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560: {Name:mk1053269a10fe97cf940622a274d032edb2023c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.281164   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:14:08.281248   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
	I1010 18:14:08.281307   99368 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:14:08.281325   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt with IP's: []
	I1010 18:14:08.428528   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt ...
	I1010 18:14:08.428562   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt: {Name:mk868dec1ca79ab4285d30dbc6ee93e0f0415a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.428730   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key ...
	I1010 18:14:08.428741   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key: {Name:mk5632176fd6e0bd1fedbd590f44cb77fc86fc75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.428812   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:14:08.428829   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:14:08.428839   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:14:08.428867   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:14:08.428886   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:14:08.428905   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:14:08.428919   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:14:08.428930   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:14:08.428986   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:14:08.429023   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:14:08.429032   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:14:08.429057   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:14:08.429082   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:14:08.429103   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:14:08.429139   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:08.429166   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.429180   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.429192   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.429725   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:14:08.459934   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:14:08.486537   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:14:08.511793   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:14:08.536743   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:14:08.569819   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:14:08.605499   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:14:08.633615   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:14:08.657501   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:14:08.684906   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:14:08.712812   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:14:08.741219   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:14:08.760444   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:14:08.766741   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:14:08.778475   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.783145   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.783213   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.789500   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:14:08.800279   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:14:08.811452   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.816338   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.816413   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.822105   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:14:08.833024   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:14:08.844522   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.849855   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.849915   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.856326   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
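The .0 symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: consumers of the system trust store look certificates up by the output of openssl x509 -hash plus a .0 suffix, which is why each ln -fs is paired with a hash command. A minimal sketch of reproducing one link by hand, reusing the same paths as above:

    # Sketch: recreate the hash-named trust-store link for the minikube CA manually.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"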
	I1010 18:14:08.868339   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:14:08.873080   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:14:08.873139   99368 kubeadm.go:392] StartCluster: {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:14:08.873227   99368 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:14:08.873270   99368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:14:08.916635   99368 cri.go:89] found id: ""
	I1010 18:14:08.916701   99368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:14:08.927424   99368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:14:08.937639   99368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:14:08.950754   99368 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:14:08.950779   99368 kubeadm.go:157] found existing configuration files:
	
	I1010 18:14:08.950834   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:14:08.962204   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:14:08.962290   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:14:08.975261   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:14:08.986716   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:14:08.986809   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:14:08.998689   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:14:09.010244   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:14:09.010336   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:14:09.022153   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:14:09.033360   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:14:09.033436   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 18:14:09.045356   99368 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 18:14:09.160966   99368 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 18:14:09.161052   99368 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 18:14:09.286355   99368 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 18:14:09.286552   99368 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 18:14:09.286700   99368 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 18:14:09.304139   99368 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 18:14:09.367960   99368 out.go:235]   - Generating certificates and keys ...
	I1010 18:14:09.368080   99368 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 18:14:09.368161   99368 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 18:14:09.384046   99368 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 18:14:09.463103   99368 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1010 18:14:09.567857   99368 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1010 18:14:09.723111   99368 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1010 18:14:09.854233   99368 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1010 18:14:09.854378   99368 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-142481 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	I1010 18:14:09.939722   99368 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1010 18:14:09.939862   99368 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-142481 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	I1010 18:14:10.144343   99368 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 18:14:10.236373   99368 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 18:14:10.313629   99368 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1010 18:14:10.313727   99368 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 18:14:10.420431   99368 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 18:14:10.571019   99368 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 18:14:10.736436   99368 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 18:14:10.835479   99368 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 18:14:10.964962   99368 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 18:14:10.965625   99368 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 18:14:10.970210   99368 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 18:14:10.974272   99368 out.go:235]   - Booting up control plane ...
	I1010 18:14:10.974411   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 18:14:10.974532   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 18:14:10.974647   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 18:14:10.995458   99368 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 18:14:11.002605   99368 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 18:14:11.002687   99368 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 18:14:11.149847   99368 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:14:11.150007   99368 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:14:11.651121   99368 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.084729ms
	I1010 18:14:11.651236   99368 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 18:14:20.808127   99368 kubeadm.go:310] [api-check] The API server is healthy after 9.156536113s
	I1010 18:14:20.824946   99368 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:14:20.839773   99368 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:14:20.870820   99368 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:14:20.871016   99368 kubeadm.go:310] [mark-control-plane] Marking the node ha-142481 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:14:20.887157   99368 kubeadm.go:310] [bootstrap-token] Using token: 644oik.7go4jyqro7if5l4w
	I1010 18:14:20.888737   99368 out.go:235]   - Configuring RBAC rules ...
	I1010 18:14:20.888842   99368 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:14:20.898440   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:14:20.910480   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:14:20.915628   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 18:14:20.920682   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:14:20.931471   99368 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:14:21.219016   99368 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:14:21.647641   99368 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 18:14:22.223206   99368 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 18:14:22.224137   99368 kubeadm.go:310] 
	I1010 18:14:22.224257   99368 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 18:14:22.224281   99368 kubeadm.go:310] 
	I1010 18:14:22.224367   99368 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 18:14:22.224376   99368 kubeadm.go:310] 
	I1010 18:14:22.224411   99368 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 18:14:22.224481   99368 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:14:22.224552   99368 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:14:22.224561   99368 kubeadm.go:310] 
	I1010 18:14:22.224636   99368 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 18:14:22.224649   99368 kubeadm.go:310] 
	I1010 18:14:22.224716   99368 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:14:22.224728   99368 kubeadm.go:310] 
	I1010 18:14:22.224806   99368 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 18:14:22.224925   99368 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:14:22.225015   99368 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:14:22.225025   99368 kubeadm.go:310] 
	I1010 18:14:22.225149   99368 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:14:22.225266   99368 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 18:14:22.225276   99368 kubeadm.go:310] 
	I1010 18:14:22.225390   99368 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 644oik.7go4jyqro7if5l4w \
	I1010 18:14:22.225541   99368 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 18:14:22.225591   99368 kubeadm.go:310] 	--control-plane 
	I1010 18:14:22.225619   99368 kubeadm.go:310] 
	I1010 18:14:22.225743   99368 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:14:22.225753   99368 kubeadm.go:310] 
	I1010 18:14:22.225845   99368 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 644oik.7go4jyqro7if5l4w \
	I1010 18:14:22.225968   99368 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 18:14:22.226430   99368 kubeadm.go:310] W1010 18:14:09.112606     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 18:14:22.226836   99368 kubeadm.go:310] W1010 18:14:09.113373     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 18:14:22.226944   99368 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
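The --discovery-token-ca-cert-hash value printed in both join commands above is the SHA-256 digest of the cluster CA's public key. As a sketch (the standard kubeadm recipe, not taken from this log; it assumes the CA key is RSA and uses the certificateDir /var/lib/minikube/certs reported earlier), the same hash can be recomputed on the control-plane node:

    # Sketch: recompute the discovery hash shown in the join commands above.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'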
	I1010 18:14:22.226978   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:14:22.226989   99368 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 18:14:22.229089   99368 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1010 18:14:22.230625   99368 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:14:22.236334   99368 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1010 18:14:22.236358   99368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:14:22.263826   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 18:14:22.691291   99368 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:14:22.691383   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:22.691399   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481 minikube.k8s.io/updated_at=2024_10_10T18_14_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=true
	I1010 18:14:22.748532   99368 ops.go:34] apiserver oom_adj: -16
	I1010 18:14:22.970463   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:23.471032   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:23.553414   99368 kubeadm.go:1113] duration metric: took 862.100636ms to wait for elevateKubeSystemPrivileges
	I1010 18:14:23.553464   99368 kubeadm.go:394] duration metric: took 14.680326546s to StartCluster
	I1010 18:14:23.553490   99368 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:23.553611   99368 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:14:23.554487   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:23.554725   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:14:23.554735   99368 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:14:23.554719   99368 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:14:23.554809   99368 addons.go:69] Setting storage-provisioner=true in profile "ha-142481"
	I1010 18:14:23.554818   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:14:23.554825   99368 addons.go:234] Setting addon storage-provisioner=true in "ha-142481"
	I1010 18:14:23.554829   99368 addons.go:69] Setting default-storageclass=true in profile "ha-142481"
	I1010 18:14:23.554845   99368 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-142481"
	I1010 18:14:23.554853   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:23.554928   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:23.555209   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.555239   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.555300   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.555338   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.570324   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36105
	I1010 18:14:23.570445   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I1010 18:14:23.570857   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.570886   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.571436   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.571459   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.571566   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.571589   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.571790   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.571894   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.571996   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.572434   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.572484   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.574225   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:14:23.574554   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1010 18:14:23.575091   99368 cert_rotation.go:140] Starting client certificate rotation controller
	I1010 18:14:23.575347   99368 addons.go:234] Setting addon default-storageclass=true in "ha-142481"
	I1010 18:14:23.575391   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:23.575743   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.575783   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.587483   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34291
	I1010 18:14:23.587940   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.588477   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.588502   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.588933   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.589102   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.590856   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:23.590904   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I1010 18:14:23.591399   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.591917   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.591946   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.592234   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.592690   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.592731   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.593082   99368 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:14:23.594593   99368 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:14:23.594613   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:14:23.594629   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:23.597561   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.598029   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:23.598057   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.598292   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:23.598455   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:23.598621   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:23.598811   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:23.608949   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I1010 18:14:23.609372   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.609889   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.609916   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.610243   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.610467   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.612216   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:23.612447   99368 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:14:23.612464   99368 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:14:23.612481   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:23.615402   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.615852   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:23.615886   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.616075   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:23.616255   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:23.616404   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:23.616566   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:23.680546   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:14:23.774021   99368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:14:23.820915   99368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:14:24.197953   99368 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
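The "host record injected" line refers to the sed pipeline at 18:14:23.680546, which splices a hosts block (192.168.39.1 host.minikube.internal, then fallthrough) into the CoreDNS Corefile just above its forward directive, so in-cluster lookups of host.minikube.internal resolve to the host gateway. A quick way to confirm the edit, reusing the kubeconfig and kubectl paths already shown above:

    # Sketch: show the injected hosts block in the live CoreDNS ConfigMap.
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'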
	I1010 18:14:24.533925   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.533960   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.533990   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534001   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534267   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534297   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534313   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534319   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534320   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534323   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534342   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534328   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534394   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534402   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534551   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534571   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534647   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534673   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534690   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534743   99368 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1010 18:14:24.534893   99368 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1010 18:14:24.535016   99368 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1010 18:14:24.535028   99368 round_trippers.go:469] Request Headers:
	I1010 18:14:24.535038   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:14:24.535046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:14:24.550066   99368 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1010 18:14:24.550802   99368 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1010 18:14:24.550817   99368 round_trippers.go:469] Request Headers:
	I1010 18:14:24.550825   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:14:24.550830   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:14:24.550834   99368 round_trippers.go:473]     Content-Type: application/json
	I1010 18:14:24.554277   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:14:24.554448   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.554465   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.554772   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.554791   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.556620   99368 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1010 18:14:24.558034   99368 addons.go:510] duration metric: took 1.003294102s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1010 18:14:24.558071   99368 start.go:246] waiting for cluster config update ...
	I1010 18:14:24.558083   99368 start.go:255] writing updated cluster config ...
	I1010 18:14:24.559825   99368 out.go:201] 
	I1010 18:14:24.561439   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:24.561503   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:24.563101   99368 out.go:177] * Starting "ha-142481-m02" control-plane node in "ha-142481" cluster
	I1010 18:14:24.564327   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:14:24.564349   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:14:24.564452   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:14:24.564466   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:14:24.564540   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:24.564701   99368 start.go:360] acquireMachinesLock for ha-142481-m02: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:14:24.564749   99368 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "ha-142481-m02"
	I1010 18:14:24.564772   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:14:24.564841   99368 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1010 18:14:24.566583   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:14:24.566679   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:24.566707   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:24.581685   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I1010 18:14:24.582176   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:24.582682   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:24.582704   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:24.583014   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:24.583206   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:24.583343   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:24.583500   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:14:24.583528   99368 client.go:168] LocalClient.Create starting
	I1010 18:14:24.583563   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:14:24.583608   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:14:24.583628   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:14:24.583689   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:14:24.583714   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:14:24.583730   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:14:24.583754   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:14:24.583765   99368 main.go:141] libmachine: (ha-142481-m02) Calling .PreCreateCheck
	I1010 18:14:24.584021   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:24.584567   99368 main.go:141] libmachine: Creating machine...
	I1010 18:14:24.584588   99368 main.go:141] libmachine: (ha-142481-m02) Calling .Create
	I1010 18:14:24.584740   99368 main.go:141] libmachine: (ha-142481-m02) Creating KVM machine...
	I1010 18:14:24.585948   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found existing default KVM network
	I1010 18:14:24.586049   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found existing private KVM network mk-ha-142481
	I1010 18:14:24.586156   99368 main.go:141] libmachine: (ha-142481-m02) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 ...
	I1010 18:14:24.586179   99368 main.go:141] libmachine: (ha-142481-m02) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:14:24.586274   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:24.586151   99736 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:14:24.586354   99368 main.go:141] libmachine: (ha-142481-m02) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:14:24.870233   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:24.870047   99736 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa...
	I1010 18:14:25.124750   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:25.124608   99736 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/ha-142481-m02.rawdisk...
	I1010 18:14:25.124783   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Writing magic tar header
	I1010 18:14:25.124795   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Writing SSH key tar header
	I1010 18:14:25.124806   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:25.124735   99736 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 ...
	I1010 18:14:25.124821   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02
	I1010 18:14:25.124919   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:14:25.124946   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 (perms=drwx------)
	I1010 18:14:25.124954   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:14:25.124968   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:14:25.124973   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:14:25.124980   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:14:25.124988   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:14:25.124994   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:14:25.124999   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:14:25.125037   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:14:25.125058   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:14:25.125067   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home
	I1010 18:14:25.125079   99368 main.go:141] libmachine: (ha-142481-m02) Creating domain...
	I1010 18:14:25.125091   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Skipping /home - not owner
	I1010 18:14:25.126075   99368 main.go:141] libmachine: (ha-142481-m02) define libvirt domain using xml: 
	I1010 18:14:25.126098   99368 main.go:141] libmachine: (ha-142481-m02) <domain type='kvm'>
	I1010 18:14:25.126107   99368 main.go:141] libmachine: (ha-142481-m02)   <name>ha-142481-m02</name>
	I1010 18:14:25.126114   99368 main.go:141] libmachine: (ha-142481-m02)   <memory unit='MiB'>2200</memory>
	I1010 18:14:25.126125   99368 main.go:141] libmachine: (ha-142481-m02)   <vcpu>2</vcpu>
	I1010 18:14:25.126132   99368 main.go:141] libmachine: (ha-142481-m02)   <features>
	I1010 18:14:25.126140   99368 main.go:141] libmachine: (ha-142481-m02)     <acpi/>
	I1010 18:14:25.126150   99368 main.go:141] libmachine: (ha-142481-m02)     <apic/>
	I1010 18:14:25.126164   99368 main.go:141] libmachine: (ha-142481-m02)     <pae/>
	I1010 18:14:25.126176   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126185   99368 main.go:141] libmachine: (ha-142481-m02)   </features>
	I1010 18:14:25.126193   99368 main.go:141] libmachine: (ha-142481-m02)   <cpu mode='host-passthrough'>
	I1010 18:14:25.126201   99368 main.go:141] libmachine: (ha-142481-m02)   
	I1010 18:14:25.126208   99368 main.go:141] libmachine: (ha-142481-m02)   </cpu>
	I1010 18:14:25.126215   99368 main.go:141] libmachine: (ha-142481-m02)   <os>
	I1010 18:14:25.126225   99368 main.go:141] libmachine: (ha-142481-m02)     <type>hvm</type>
	I1010 18:14:25.126232   99368 main.go:141] libmachine: (ha-142481-m02)     <boot dev='cdrom'/>
	I1010 18:14:25.126241   99368 main.go:141] libmachine: (ha-142481-m02)     <boot dev='hd'/>
	I1010 18:14:25.126251   99368 main.go:141] libmachine: (ha-142481-m02)     <bootmenu enable='no'/>
	I1010 18:14:25.126273   99368 main.go:141] libmachine: (ha-142481-m02)   </os>
	I1010 18:14:25.126284   99368 main.go:141] libmachine: (ha-142481-m02)   <devices>
	I1010 18:14:25.126294   99368 main.go:141] libmachine: (ha-142481-m02)     <disk type='file' device='cdrom'>
	I1010 18:14:25.126307   99368 main.go:141] libmachine: (ha-142481-m02)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/boot2docker.iso'/>
	I1010 18:14:25.126318   99368 main.go:141] libmachine: (ha-142481-m02)       <target dev='hdc' bus='scsi'/>
	I1010 18:14:25.126329   99368 main.go:141] libmachine: (ha-142481-m02)       <readonly/>
	I1010 18:14:25.126342   99368 main.go:141] libmachine: (ha-142481-m02)     </disk>
	I1010 18:14:25.126353   99368 main.go:141] libmachine: (ha-142481-m02)     <disk type='file' device='disk'>
	I1010 18:14:25.126365   99368 main.go:141] libmachine: (ha-142481-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:14:25.126380   99368 main.go:141] libmachine: (ha-142481-m02)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/ha-142481-m02.rawdisk'/>
	I1010 18:14:25.126391   99368 main.go:141] libmachine: (ha-142481-m02)       <target dev='hda' bus='virtio'/>
	I1010 18:14:25.126401   99368 main.go:141] libmachine: (ha-142481-m02)     </disk>
	I1010 18:14:25.126413   99368 main.go:141] libmachine: (ha-142481-m02)     <interface type='network'>
	I1010 18:14:25.126425   99368 main.go:141] libmachine: (ha-142481-m02)       <source network='mk-ha-142481'/>
	I1010 18:14:25.126434   99368 main.go:141] libmachine: (ha-142481-m02)       <model type='virtio'/>
	I1010 18:14:25.126443   99368 main.go:141] libmachine: (ha-142481-m02)     </interface>
	I1010 18:14:25.126454   99368 main.go:141] libmachine: (ha-142481-m02)     <interface type='network'>
	I1010 18:14:25.126463   99368 main.go:141] libmachine: (ha-142481-m02)       <source network='default'/>
	I1010 18:14:25.126473   99368 main.go:141] libmachine: (ha-142481-m02)       <model type='virtio'/>
	I1010 18:14:25.126494   99368 main.go:141] libmachine: (ha-142481-m02)     </interface>
	I1010 18:14:25.126518   99368 main.go:141] libmachine: (ha-142481-m02)     <serial type='pty'>
	I1010 18:14:25.126526   99368 main.go:141] libmachine: (ha-142481-m02)       <target port='0'/>
	I1010 18:14:25.126530   99368 main.go:141] libmachine: (ha-142481-m02)     </serial>
	I1010 18:14:25.126535   99368 main.go:141] libmachine: (ha-142481-m02)     <console type='pty'>
	I1010 18:14:25.126545   99368 main.go:141] libmachine: (ha-142481-m02)       <target type='serial' port='0'/>
	I1010 18:14:25.126550   99368 main.go:141] libmachine: (ha-142481-m02)     </console>
	I1010 18:14:25.126556   99368 main.go:141] libmachine: (ha-142481-m02)     <rng model='virtio'>
	I1010 18:14:25.126562   99368 main.go:141] libmachine: (ha-142481-m02)       <backend model='random'>/dev/random</backend>
	I1010 18:14:25.126569   99368 main.go:141] libmachine: (ha-142481-m02)     </rng>
	I1010 18:14:25.126574   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126579   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126610   99368 main.go:141] libmachine: (ha-142481-m02)   </devices>
	I1010 18:14:25.126633   99368 main.go:141] libmachine: (ha-142481-m02) </domain>
	I1010 18:14:25.126647   99368 main.go:141] libmachine: (ha-142481-m02) 
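The block above is the complete libvirt domain XML the kvm2 driver generates for the second control-plane machine: 2200 MiB of RAM, 2 vCPUs, the boot2docker ISO attached as a cdrom, the raw disk image, and two virtio NICs on the default and mk-ha-142481 networks. For orientation only, here is a minimal Go sketch of defining and booting such a guest outside minikube by shelling out to the stock virsh CLI; the XML file name and the helper function are illustrative and are not part of minikube, which drives libvirt through its own kvm2 driver bindings instead.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// defineAndStart registers a domain from an XML file and boots it.
// This mirrors the "define libvirt domain using xml" / "Creating domain..."
// steps logged above, but via the virsh command line.
func defineAndStart(xmlPath, name string) error {
	if _, err := os.Stat(xmlPath); err != nil {
		return err // XML file must exist before virsh define
	}
	for _, args := range [][]string{
		{"define", xmlPath}, // register the domain with libvirt
		{"start", name},     // boot it; DHCP then hands the guest an IP
	} {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("virsh %v: %w", args, err)
		}
	}
	return nil
}

func main() {
	// Hypothetical file name; the log above builds the XML in memory.
	if err := defineAndStart("ha-142481-m02.xml", "ha-142481-m02"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}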
	I1010 18:14:25.133808   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:63:37:66 in network default
	I1010 18:14:25.134525   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:25.134551   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring networks are active...
	I1010 18:14:25.135477   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring network default is active
	I1010 18:14:25.135837   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring network mk-ha-142481 is active
	I1010 18:14:25.136343   99368 main.go:141] libmachine: (ha-142481-m02) Getting domain xml...
	I1010 18:14:25.137263   99368 main.go:141] libmachine: (ha-142481-m02) Creating domain...
	I1010 18:14:26.362672   99368 main.go:141] libmachine: (ha-142481-m02) Waiting to get IP...
	I1010 18:14:26.363443   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.363821   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.363878   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.363829   99736 retry.go:31] will retry after 237.123337ms: waiting for machine to come up
	I1010 18:14:26.602398   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.602883   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.602910   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.602829   99736 retry.go:31] will retry after 255.919096ms: waiting for machine to come up
	I1010 18:14:26.860273   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.860891   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.860917   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.860860   99736 retry.go:31] will retry after 363.867823ms: waiting for machine to come up
	I1010 18:14:27.226493   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:27.226955   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:27.226984   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:27.226896   99736 retry.go:31] will retry after 430.931001ms: waiting for machine to come up
	I1010 18:14:27.659820   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:27.660273   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:27.660299   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:27.660222   99736 retry.go:31] will retry after 681.867141ms: waiting for machine to come up
	I1010 18:14:28.344366   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:28.344931   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:28.344989   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:28.344843   99736 retry.go:31] will retry after 753.410001ms: waiting for machine to come up
	I1010 18:14:29.099845   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:29.100316   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:29.100345   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:29.100254   99736 retry.go:31] will retry after 1.081998824s: waiting for machine to come up
	I1010 18:14:30.183319   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:30.183733   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:30.183762   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:30.183699   99736 retry.go:31] will retry after 1.2621544s: waiting for machine to come up
	I1010 18:14:31.448194   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:31.448615   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:31.448639   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:31.448571   99736 retry.go:31] will retry after 1.545841483s: waiting for machine to come up
	I1010 18:14:32.996370   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:32.996940   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:32.996970   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:32.996877   99736 retry.go:31] will retry after 1.954916368s: waiting for machine to come up
	I1010 18:14:34.953362   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:34.953810   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:34.953834   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:34.953765   99736 retry.go:31] will retry after 2.832021438s: waiting for machine to come up
	I1010 18:14:37.787030   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:37.787437   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:37.787462   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:37.787399   99736 retry.go:31] will retry after 3.372903659s: waiting for machine to come up
	I1010 18:14:41.162229   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:41.162830   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:41.162860   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:41.162748   99736 retry.go:31] will retry after 3.532610017s: waiting for machine to come up
	I1010 18:14:44.697346   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:44.697811   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:44.697838   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:44.697765   99736 retry.go:31] will retry after 4.121205885s: waiting for machine to come up
	I1010 18:14:48.820235   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.820691   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has current primary IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.820707   99368 main.go:141] libmachine: (ha-142481-m02) Found IP for machine: 192.168.39.186
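The retry.go lines above poll the DHCP leases for the new MAC address, growing the wait from roughly 240ms up to a few seconds until the lease for 192.168.39.186 appears. Below is a minimal sketch of that kind of backoff loop; the waitForIP helper and its lookup callback are illustrative stand-ins, not minikube's actual retry API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling lookup until it returns an address, sleeping a
// jittered, roughly doubling delay between attempts, as the log above shows.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	start := time.Now()
	for {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		if time.Since(start) > deadline {
			return "", fmt.Errorf("timed out waiting for machine IP: %w", err)
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // add jitter
		if delay < 4*time.Second {
			delay *= 2 // grow the backoff, capped at a few seconds
		}
	}
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.186", nil
	}, time.Minute)
	fmt.Println("ip:", ip, "attempts:", attempts, "err:", err)
}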
	I1010 18:14:48.820716   99368 main.go:141] libmachine: (ha-142481-m02) Reserving static IP address...
	I1010 18:14:48.821115   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find host DHCP lease matching {name: "ha-142481-m02", mac: "52:54:00:70:30:26", ip: "192.168.39.186"} in network mk-ha-142481
	I1010 18:14:48.903340   99368 main.go:141] libmachine: (ha-142481-m02) Reserved static IP address: 192.168.39.186
	I1010 18:14:48.903376   99368 main.go:141] libmachine: (ha-142481-m02) Waiting for SSH to be available...
	I1010 18:14:48.903387   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Getting to WaitForSSH function...
	I1010 18:14:48.906232   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.906828   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:30:26}
	I1010 18:14:48.906862   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.907057   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using SSH client type: external
	I1010 18:14:48.907087   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa (-rw-------)
	I1010 18:14:48.907120   99368 main.go:141] libmachine: (ha-142481-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:14:48.907134   99368 main.go:141] libmachine: (ha-142481-m02) DBG | About to run SSH command:
	I1010 18:14:48.907147   99368 main.go:141] libmachine: (ha-142481-m02) DBG | exit 0
	I1010 18:14:49.037555   99368 main.go:141] libmachine: (ha-142481-m02) DBG | SSH cmd err, output: <nil>: 
	I1010 18:14:49.037876   99368 main.go:141] libmachine: (ha-142481-m02) KVM machine creation complete!
	I1010 18:14:49.038189   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:49.038756   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:49.038950   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:49.039103   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:14:49.039117   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetState
	I1010 18:14:49.040560   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:14:49.040573   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:14:49.040578   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:14:49.040584   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.042911   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.043240   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.043266   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.043533   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.043730   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.043927   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.044092   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.044245   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.044498   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.044515   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:14:49.156568   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:49.156599   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:14:49.156607   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.159819   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.160299   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.160329   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.160572   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.160782   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.160954   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.161115   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.161282   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.161504   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.161519   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:14:49.274150   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:14:49.274238   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:14:49.274249   99368 main.go:141] libmachine: Provisioning with buildroot...
	I1010 18:14:49.274261   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.274541   99368 buildroot.go:166] provisioning hostname "ha-142481-m02"
	I1010 18:14:49.274574   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.274809   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.277484   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.277861   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.277893   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.278037   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.278241   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.278416   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.278595   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.278858   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.279047   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.279061   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481-m02 && echo "ha-142481-m02" | sudo tee /etc/hostname
	I1010 18:14:49.409335   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481-m02
	
	I1010 18:14:49.409369   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.412112   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.412427   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.412458   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.412712   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.412921   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.413069   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.413182   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.413398   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.413565   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.413581   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:14:49.542003   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:49.542039   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:14:49.542058   99368 buildroot.go:174] setting up certificates
	I1010 18:14:49.542069   99368 provision.go:84] configureAuth start
	I1010 18:14:49.542080   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.542340   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:49.545159   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.545524   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.545554   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.545698   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.547804   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.548115   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.548135   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.548323   99368 provision.go:143] copyHostCerts
	I1010 18:14:49.548352   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:49.548392   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:14:49.548403   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:49.548486   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:14:49.548582   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:49.548609   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:14:49.548619   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:49.548657   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:14:49.548719   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:49.548743   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:14:49.548752   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:49.548788   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:14:49.548865   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481-m02 san=[127.0.0.1 192.168.39.186 ha-142481-m02 localhost minikube]
	I1010 18:14:49.606708   99368 provision.go:177] copyRemoteCerts
	I1010 18:14:49.606781   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:14:49.606811   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.609620   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.609921   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.609952   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.610121   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.610322   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.610506   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.610631   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:49.695655   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:14:49.695736   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 18:14:49.723445   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:14:49.723520   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:14:49.748318   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:14:49.748402   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:14:49.773423   99368 provision.go:87] duration metric: took 231.339814ms to configureAuth
	I1010 18:14:49.773451   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:14:49.773626   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:49.773705   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.776350   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.776701   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.776726   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.776913   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.777128   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.777292   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.777435   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.777590   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.777795   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.777817   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:14:50.018484   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:14:50.018513   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:14:50.018525   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetURL
	I1010 18:14:50.019796   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using libvirt version 6000000
	I1010 18:14:50.022107   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.022432   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.022476   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.022628   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:14:50.022646   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:14:50.022657   99368 client.go:171] duration metric: took 25.439118717s to LocalClient.Create
	I1010 18:14:50.022695   99368 start.go:167] duration metric: took 25.439191435s to libmachine.API.Create "ha-142481"
	I1010 18:14:50.022708   99368 start.go:293] postStartSetup for "ha-142481-m02" (driver="kvm2")
	I1010 18:14:50.022725   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:14:50.022763   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.023030   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:14:50.023055   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.025463   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.025834   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.025869   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.026093   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.026322   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.026520   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.026673   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.115488   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:14:50.120106   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:14:50.120146   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:14:50.120259   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:14:50.120347   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:14:50.120360   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:14:50.120462   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:14:50.130011   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:50.156296   99368 start.go:296] duration metric: took 133.570332ms for postStartSetup
	I1010 18:14:50.156350   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:50.156937   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:50.159597   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.160043   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.160071   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.160321   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:50.160495   99368 start.go:128] duration metric: took 25.595643097s to createHost
	I1010 18:14:50.160517   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.162762   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.163085   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.163110   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.163276   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.163459   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.163603   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.163760   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.163931   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:50.164125   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:50.164139   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:14:50.277898   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584090.237251579
	
	I1010 18:14:50.277925   99368 fix.go:216] guest clock: 1728584090.237251579
	I1010 18:14:50.277933   99368 fix.go:229] Guest: 2024-10-10 18:14:50.237251579 +0000 UTC Remote: 2024-10-10 18:14:50.160506288 +0000 UTC m=+72.091094363 (delta=76.745291ms)
	I1010 18:14:50.277949   99368 fix.go:200] guest clock delta is within tolerance: 76.745291ms
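fix.go compares the guest's `date +%s.%N` output (1728584090.237251579) against the host-side timestamp and accepts the roughly 76.7ms drift as within tolerance. The small self-contained illustration below reproduces that comparison with the values from the log; the one-second threshold is an assumption for the example, not minikube's actual cutoff.

package main

import (
	"fmt"
	"time"
)

// clockDelta converts the guest's fractional epoch seconds into a time.Time
// and returns how far it is ahead of (or behind) the host-side timestamp.
func clockDelta(guestEpoch float64, hostTime time.Time) time.Duration {
	sec := int64(guestEpoch)
	nsec := int64((guestEpoch - float64(sec)) * 1e9)
	return time.Unix(sec, nsec).Sub(hostTime)
}

func main() {
	// Values taken from the log lines above.
	host := time.Date(2024, 10, 10, 18, 14, 50, 160506288, time.UTC)
	delta := clockDelta(1728584090.237251579, host)
	const tolerance = 1 * time.Second // assumed threshold for illustration
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta.Abs() < tolerance)
}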
	I1010 18:14:50.277955   99368 start.go:83] releasing machines lock for "ha-142481-m02", held for 25.713195595s
	I1010 18:14:50.277975   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.278294   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:50.280842   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.281256   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.281283   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.283734   99368 out.go:177] * Found network options:
	I1010 18:14:50.285300   99368 out.go:177]   - NO_PROXY=192.168.39.104
	W1010 18:14:50.286708   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:14:50.286748   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287340   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287549   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287642   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:14:50.287694   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	W1010 18:14:50.287740   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:14:50.287827   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:14:50.287852   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.290823   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.290971   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291276   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.291307   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291499   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.291594   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.291635   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291693   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.291858   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.291862   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.292017   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.292017   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.292146   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.292458   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.532570   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:14:50.540169   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:14:50.540248   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:14:50.557472   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:14:50.557500   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:14:50.557574   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:14:50.574787   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:14:50.590774   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:14:50.590848   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:14:50.605941   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:14:50.620901   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:14:50.753387   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:14:50.919446   99368 docker.go:233] disabling docker service ...
	I1010 18:14:50.919535   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:14:50.934691   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:14:50.948383   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:14:51.098212   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:14:51.222205   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:14:51.236395   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:14:51.255620   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:14:51.255682   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.265706   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:14:51.265766   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.276288   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.287384   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.298290   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:14:51.309391   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.322059   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.341165   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.352334   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:14:51.361995   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:14:51.362055   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:14:51.376647   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:14:51.387344   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:51.501276   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
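The run above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed over SSH: it pins pause_image to registry.k8s.io/pause:3.10, switches cgroup_manager to cgroupfs with conmon_cgroup = "pod", opens unprivileged ports via default_sysctls, and then reloads systemd and restarts crio. The following is a rough Go rendering of the two central edits, for illustration only; minikube itself performs them with sed through ssh_runner rather than with a Go program on the node.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf := string(data)
	// Same substitutions as the sed commands logged above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("updated", path, "- apply with: systemctl restart crio")
}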
	I1010 18:14:51.591570   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:14:51.591667   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:14:51.596519   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:14:51.596593   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:14:51.600964   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:14:51.642625   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:14:51.642709   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:51.670857   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:51.701992   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:14:51.703402   99368 out.go:177]   - env NO_PROXY=192.168.39.104
	I1010 18:14:51.704577   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:51.707504   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:51.707889   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:51.707921   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:51.708187   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:14:51.712581   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:51.728042   99368 mustload.go:65] Loading cluster: ha-142481
	I1010 18:14:51.728254   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:51.728534   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:51.728571   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:51.744127   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I1010 18:14:51.744674   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:51.745223   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:51.745247   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:51.745620   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:51.745831   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:51.747403   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:51.747706   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:51.747737   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:51.763030   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41747
	I1010 18:14:51.763446   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:51.763925   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:51.763949   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:51.764295   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:51.764486   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:51.764627   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.186
	I1010 18:14:51.764637   99368 certs.go:194] generating shared ca certs ...
	I1010 18:14:51.764650   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.764765   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:14:51.764803   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:14:51.764812   99368 certs.go:256] generating profile certs ...
	I1010 18:14:51.764912   99368 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:14:51.764937   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992
	I1010 18:14:51.764951   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.186 192.168.39.254]
	I1010 18:14:51.993768   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 ...
	I1010 18:14:51.993803   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992: {Name:mk9eca5b6bcf4de2bd1cb4984282b7c5168c504a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.993982   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992 ...
	I1010 18:14:51.993996   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992: {Name:mk53f522d230afb3a7d1b4f761a379d6be7ff843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.994077   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:14:51.994210   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
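The apiserver profile certificate is regenerated here because its SAN set now has to cover the new node: 10.96.0.1, 127.0.0.1, 10.0.0.1, both node IPs 192.168.39.104 and 192.168.39.186, and the virtual IP 192.168.39.254, plus the hostnames ha-142481-m02, localhost and minikube. Below is a self-contained sketch of issuing a certificate with that SAN set using Go's crypto/x509; it is self-signed for brevity, whereas minikube signs with its minikubeCA key and its own key material.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-142481-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log above.
		DNSNames: []string{"ha-142481-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.104"),
			net.ParseIP("192.168.39.186"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}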
	I1010 18:14:51.994347   99368 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:14:51.994363   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:14:51.994376   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:14:51.994389   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:14:51.994407   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:14:51.994420   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:14:51.994432   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:14:51.994443   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:14:51.994454   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:14:51.994507   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:14:51.994535   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:14:51.994545   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:14:51.994565   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:14:51.994589   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:14:51.994613   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:14:51.994650   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:51.994681   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:51.994695   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:14:51.994706   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:14:51.994740   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:51.997958   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:51.998443   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:51.998473   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:51.998636   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:51.998839   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:51.999035   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:51.999239   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:52.077280   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1010 18:14:52.082655   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1010 18:14:52.094293   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1010 18:14:52.102951   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1010 18:14:52.115800   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1010 18:14:52.120082   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1010 18:14:52.130693   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1010 18:14:52.135696   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1010 18:14:52.148816   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1010 18:14:52.158283   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1010 18:14:52.169959   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1010 18:14:52.174352   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1010 18:14:52.185494   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:14:52.211191   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:14:52.237842   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:14:52.263110   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:14:52.287843   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1010 18:14:52.313473   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:14:52.338065   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:14:52.363071   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:14:52.387579   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:14:52.412888   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:14:52.437781   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:14:52.464757   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1010 18:14:52.481913   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1010 18:14:52.499025   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1010 18:14:52.515900   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1010 18:14:52.533545   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1010 18:14:52.550809   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1010 18:14:52.567422   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1010 18:14:52.584795   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:14:52.590891   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:14:52.602879   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.607603   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.607658   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.613708   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:14:52.631468   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:14:52.643064   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.647811   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.647874   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.653881   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:14:52.665152   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:14:52.676562   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.681256   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.681313   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.687223   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 18:14:52.699194   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:14:52.703641   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:14:52.703707   99368 kubeadm.go:934] updating node {m02 192.168.39.186 8443 v1.31.1 crio true true} ...
	I1010 18:14:52.703805   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:14:52.703835   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:14:52.703878   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:14:52.723026   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:14:52.723119   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
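For context, the manifest generated above is a static pod: it only takes effect once it is written into the kubelet's manifest directory (the scp to /etc/kubernetes/manifests/kube-vip.yaml a few lines below), after which the kubelet runs kube-vip directly, without the API server being involved. A rough Go sketch of that final step, purely illustrative and not minikube's actual code:

	package main

	import (
		"os"
		"path/filepath"
	)

	// writeStaticPod drops a rendered manifest into the kubelet's static-pod
	// directory; the kubelet watches this directory and starts the pod itself.
	func writeStaticPod(rendered []byte) error {
		dir := "/etc/kubernetes/manifests" // default kubelet staticPodPath
		if err := os.MkdirAll(dir, 0o755); err != nil {
			return err
		}
		return os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), rendered, 0o600)
	}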
	I1010 18:14:52.723189   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:52.734671   99368 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1010 18:14:52.734752   99368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:52.745741   99368 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1010 18:14:52.745751   99368 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1010 18:14:52.745751   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1010 18:14:52.745871   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:14:52.745940   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:14:52.751099   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1010 18:14:52.751132   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
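The stat/scp pairs in this part of the log all follow one pattern: probe whether the target file already exists on the node, and only transfer it when the probe fails (exit status 1 from stat). A simplified stand-in for that flow, using the plain ssh and scp binaries rather than minikube's internal ssh_runner; host and path handling are deliberately minimal:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureRemoteFile copies localPath to remotePath on host only if the
	// remote file is missing; the stat exit code serves as the existence check.
	func ensureRemoteFile(host, localPath, remotePath string) error {
		// A non-zero exit from stat means "No such file or directory".
		if err := exec.Command("ssh", host, "stat", remotePath).Run(); err == nil {
			return nil // already present, skip the transfer
		}
		return exec.Command("scp", localPath, fmt.Sprintf("%s:%s", host, remotePath)).Run()
	}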
	I1010 18:14:53.544046   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:14:53.544130   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:14:53.549472   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1010 18:14:53.549517   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1010 18:14:53.647955   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:14:53.681722   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:14:53.681823   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:14:53.695932   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1010 18:14:53.695987   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1010 18:14:54.175941   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1010 18:14:54.187282   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1010 18:14:54.205511   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:14:54.223508   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1010 18:14:54.241125   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:14:54.245490   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:54.259173   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:54.401351   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:14:54.419984   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:54.420484   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:54.420546   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:54.436033   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I1010 18:14:54.436556   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:54.437251   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:54.437281   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:54.437607   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:54.437831   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:54.438020   99368 start.go:317] joinCluster: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:14:54.438157   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1010 18:14:54.438180   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:54.441157   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:54.441581   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:54.441609   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:54.441854   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:54.442034   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:54.442149   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:54.442289   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:54.604951   99368 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:14:54.605013   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wt3o3w.k6pkjtb13sd57t6w --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m02 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443"
	I1010 18:15:14.578208   99368 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wt3o3w.k6pkjtb13sd57t6w --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m02 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443": (19.973131424s)
	I1010 18:15:14.578257   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1010 18:15:15.095544   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481-m02 minikube.k8s.io/updated_at=2024_10_10T18_15_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=false
	I1010 18:15:15.208568   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-142481-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1010 18:15:15.337167   99368 start.go:319] duration metric: took 20.899144024s to joinCluster
	I1010 18:15:15.337270   99368 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:15:15.337601   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:15:15.339949   99368 out.go:177] * Verifying Kubernetes components...
	I1010 18:15:15.341260   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:15:15.615485   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:15:15.642973   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:15:15.643325   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1010 18:15:15.643422   99368 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.104:8443
	I1010 18:15:15.643731   99368 node_ready.go:35] waiting up to 6m0s for node "ha-142481-m02" to be "Ready" ...
	I1010 18:15:15.643859   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:15.643869   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:15.643880   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:15.643892   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:15.665402   99368 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1010 18:15:16.144314   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:16.144340   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:16.144351   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:16.144357   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:16.150219   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:16.644045   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:16.644074   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:16.644086   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:16.644093   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:16.654043   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:17.144554   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:17.144581   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:17.144590   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:17.144595   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:17.148858   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:17.643970   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:17.644078   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:17.644104   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:17.644122   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:17.653880   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:17.654572   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:18.144266   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:18.144294   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:18.144302   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:18.144308   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:18.147936   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:18.644346   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:18.644369   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:18.644378   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:18.644382   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:18.648587   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:19.144413   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:19.144443   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:19.144454   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:19.144460   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:19.147695   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:19.644688   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:19.644715   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:19.644726   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:19.644730   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:19.648487   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:20.144679   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:20.144700   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:20.144708   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:20.144712   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:20.148475   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:20.149193   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:20.644644   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:20.644675   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:20.644687   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:20.644694   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:20.648513   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:21.144341   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:21.144366   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:21.144377   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:21.144384   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:21.147839   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:21.644909   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:21.644934   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:21.644942   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:21.644946   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:21.648387   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:22.144173   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:22.144196   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:22.144205   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:22.144209   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:22.147385   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:22.644414   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:22.644444   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:22.644456   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:22.644462   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:22.713904   99368 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I1010 18:15:22.714410   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:23.144902   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:23.144934   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:23.144947   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:23.144954   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:23.147993   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:23.644885   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:23.644971   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:23.644995   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:23.645002   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:23.648711   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:24.144645   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:24.144673   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:24.144685   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:24.144690   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:24.148415   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:24.644379   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:24.644413   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:24.644424   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:24.644429   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:24.648175   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:25.144097   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:25.144120   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:25.144128   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:25.144133   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:25.147203   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:25.147854   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:25.644276   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:25.644303   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:25.644311   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:25.644316   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:25.647929   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:26.143986   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:26.144010   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:26.144018   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:26.144023   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:26.147277   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:26.644893   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:26.644924   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:26.644934   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:26.644939   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:26.648455   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:27.144020   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:27.144042   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:27.144050   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:27.144053   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:27.150719   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:15:27.151307   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:27.644596   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:27.644620   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:27.644628   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:27.644632   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:27.648391   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:28.144777   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:28.144801   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:28.144809   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:28.144813   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:28.148258   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:28.644636   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:28.644665   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:28.644673   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:28.644676   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:28.648181   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.144094   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:29.144120   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:29.144128   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:29.144133   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:29.147945   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.644955   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:29.644977   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:29.644986   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:29.644990   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:29.648391   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.649199   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:30.144628   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:30.144653   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:30.144661   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:30.144665   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:30.148286   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:30.644255   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:30.644288   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:30.644299   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:30.644304   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:30.648062   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:31.144076   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:31.144101   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:31.144109   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:31.144112   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:31.148081   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:31.644011   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:31.644037   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:31.644049   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:31.644055   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:31.653327   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:31.653921   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:32.144247   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:32.144273   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:32.144282   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:32.144286   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:32.147700   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:32.644836   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:32.644894   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:32.644908   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:32.644913   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:32.648022   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:33.144204   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:33.144231   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:33.144240   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:33.144242   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:33.148094   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:33.644909   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:33.644932   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:33.644940   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:33.644943   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:33.648586   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.144644   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.144672   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.144680   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.144685   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.148129   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.148805   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:34.644279   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.644310   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.644321   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.644329   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.648073   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.648695   99368 node_ready.go:49] node "ha-142481-m02" has status "Ready":"True"
	I1010 18:15:34.648716   99368 node_ready.go:38] duration metric: took 19.004960132s for node "ha-142481-m02" to be "Ready" ...
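The repeated GET /api/v1/nodes/ha-142481-m02 calls above are a readiness poll: fetch the Node object until its Ready condition reports True or the timeout expires. A simplified client-go sketch of the same loop (function name and structure are illustrative, not minikube's node_ready code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForNodeReady polls a Node until its NodeReady condition is True.
	func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the log above shows roughly 500ms between polls
		}
		return fmt.Errorf("node %s not Ready within %s", name, timeout)
	}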
	I1010 18:15:34.648732   99368 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:15:34.648874   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:34.648887   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.648899   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.648905   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.653067   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:34.660867   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.660985   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-28dll
	I1010 18:15:34.660996   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.661004   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.661008   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.673094   99368 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1010 18:15:34.673807   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.673825   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.673833   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.673838   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.679300   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:34.679893   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.679919   99368 pod_ready.go:82] duration metric: took 19.021803ms for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.679934   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.680016   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xfhq8
	I1010 18:15:34.680028   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.680039   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.680046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.687874   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:15:34.688550   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.688567   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.688575   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.688578   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.693607   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:34.694298   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.694318   99368 pod_ready.go:82] duration metric: took 14.376081ms for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.694329   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.694401   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481
	I1010 18:15:34.694412   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.694422   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.694427   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.705466   99368 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1010 18:15:34.706122   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.706142   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.706152   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.706157   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.713862   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:15:34.714292   99368 pod_ready.go:93] pod "etcd-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.714313   99368 pod_ready.go:82] duration metric: took 19.977824ms for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.714324   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.714393   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m02
	I1010 18:15:34.714397   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.714407   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.714411   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.724173   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:34.725474   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.725492   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.725502   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.725507   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.728517   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:15:34.729350   99368 pod_ready.go:93] pod "etcd-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.729374   99368 pod_ready.go:82] duration metric: took 15.044498ms for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.729392   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.844828   99368 request.go:632] Waited for 115.352966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:15:34.844940   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:15:34.844954   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.844965   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.844980   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.849582   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.044720   99368 request.go:632] Waited for 194.440409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.044815   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.044823   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.044922   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.044934   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.049101   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.049648   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.049671   99368 pod_ready.go:82] duration metric: took 320.272231ms for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.049694   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.244714   99368 request.go:632] Waited for 194.93387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:15:35.244774   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:15:35.244780   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.244788   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.244791   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.248696   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:35.444831   99368 request.go:632] Waited for 195.412897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:35.444927   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:35.444933   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.444942   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.444946   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.448991   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.450079   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.450103   99368 pod_ready.go:82] duration metric: took 400.401007ms for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.450118   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.645157   99368 request.go:632] Waited for 194.960575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:15:35.645249   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:15:35.645257   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.645268   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.645274   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.648746   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:35.844906   99368 request.go:632] Waited for 195.418533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.844969   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.844974   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.844982   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.844985   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.849036   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.849631   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.849652   99368 pod_ready.go:82] duration metric: took 399.526564ms for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.849663   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.044750   99368 request.go:632] Waited for 194.993362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:15:36.044821   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:15:36.044829   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.044841   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.044860   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.048403   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.244872   99368 request.go:632] Waited for 195.41194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:36.244966   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:36.244978   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.244991   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.245003   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.248422   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.249090   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:36.249112   99368 pod_ready.go:82] duration metric: took 399.440459ms for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.249127   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.445275   99368 request.go:632] Waited for 196.04196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:15:36.445337   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:15:36.445343   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.445350   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.445354   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.449425   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:36.644689   99368 request.go:632] Waited for 194.411636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:36.644795   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:36.644806   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.644817   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.644825   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.648756   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.649220   99368 pod_ready.go:93] pod "kube-proxy-gwvrh" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:36.649241   99368 pod_ready.go:82] duration metric: took 400.105171ms for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.649254   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.844338   99368 request.go:632] Waited for 194.987151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:15:36.844405   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:15:36.844411   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.844420   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.844434   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.848477   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:37.044640   99368 request.go:632] Waited for 195.367234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.044708   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.044715   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.044726   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.044731   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.048116   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.048721   99368 pod_ready.go:93] pod "kube-proxy-srfng" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.048745   99368 pod_ready.go:82] duration metric: took 399.483125ms for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.048759   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.244914   99368 request.go:632] Waited for 196.022775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:15:37.244993   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:15:37.245004   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.245029   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.245036   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.248801   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.444916   99368 request.go:632] Waited for 195.401869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:37.444984   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:37.444991   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.445002   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.445008   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.448457   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.449008   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.449028   99368 pod_ready.go:82] duration metric: took 400.260773ms for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.449039   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.645172   99368 request.go:632] Waited for 196.046461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:15:37.645249   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:15:37.645256   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.645265   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.645271   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.648894   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.844799   99368 request.go:632] Waited for 195.42858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.844915   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.844926   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.844937   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.844945   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.848459   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.849058   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.849077   99368 pod_ready.go:82] duration metric: took 400.031968ms for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.849089   99368 pod_ready.go:39] duration metric: took 3.200308757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
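
The loop above polls each system pod (and the node it runs on) until the pod reports the Ready condition. A minimal client-go sketch of that kind of readiness poll, with the kubeconfig path assumed and the pod name taken from the log, could look like this (illustrative only, not minikube's pod_ready.go):

// waitpod.go - sketch: poll one pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; adjust to your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // same 6m budget as the log
	defer cancel()
	for ctx.Err() == nil {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-142481", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the pod to become Ready")
}

The real helper also checks the node the pod is scheduled on, which is why every pod GET in the log is paired with a GET on /api/v1/nodes/....
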
	I1010 18:15:37.849113   99368 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:15:37.849168   99368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:15:37.867701   99368 api_server.go:72] duration metric: took 22.53038697s to wait for apiserver process to appear ...
	I1010 18:15:37.867737   99368 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:15:37.867762   99368 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I1010 18:15:37.874449   99368 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I1010 18:15:37.874534   99368 round_trippers.go:463] GET https://192.168.39.104:8443/version
	I1010 18:15:37.874545   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.874561   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.874568   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.875635   99368 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1010 18:15:37.875761   99368 api_server.go:141] control plane version: v1.31.1
	I1010 18:15:37.875781   99368 api_server.go:131] duration metric: took 8.036588ms to wait for apiserver health ...
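
Once the pods are Ready, the apiserver itself is probed at /healthz and its version is read. A rough client-go equivalent (kubeconfig path assumed; not the exact minikube code):

// healthz.go - sketch: probe the apiserver /healthz endpoint and read the server version.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET https://<apiserver>/healthz; a healthy apiserver answers 200 with body "ok",
	// matching the "returned 200: ok" lines above.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// The server version is what the log reports as "control plane version".
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
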
	I1010 18:15:37.875792   99368 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:15:38.045248   99368 request.go:632] Waited for 169.346857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.045336   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.045344   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.045356   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.045367   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.051387   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:38.056244   99368 system_pods.go:59] 17 kube-system pods found
	I1010 18:15:38.056282   99368 system_pods.go:61] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:15:38.056289   99368 system_pods.go:61] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:15:38.056293   99368 system_pods.go:61] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:15:38.056297   99368 system_pods.go:61] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:15:38.056300   99368 system_pods.go:61] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:15:38.056308   99368 system_pods.go:61] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:15:38.056311   99368 system_pods.go:61] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:15:38.056315   99368 system_pods.go:61] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:15:38.056318   99368 system_pods.go:61] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:15:38.056323   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:15:38.056327   99368 system_pods.go:61] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:15:38.056331   99368 system_pods.go:61] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:15:38.056334   99368 system_pods.go:61] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:15:38.056337   99368 system_pods.go:61] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:15:38.056340   99368 system_pods.go:61] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:15:38.056343   99368 system_pods.go:61] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:15:38.056345   99368 system_pods.go:61] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:15:38.056352   99368 system_pods.go:74] duration metric: took 180.553557ms to wait for pod list to return data ...
	I1010 18:15:38.056362   99368 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:15:38.244537   99368 request.go:632] Waited for 188.093724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:15:38.244618   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:15:38.244624   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.244633   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.244641   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.248165   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:38.248399   99368 default_sa.go:45] found service account: "default"
	I1010 18:15:38.248416   99368 default_sa.go:55] duration metric: took 192.046524ms for default service account to be created ...
	I1010 18:15:38.248427   99368 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:15:38.444704   99368 request.go:632] Waited for 196.206785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.444765   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.444770   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.444778   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.444783   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.479585   99368 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I1010 18:15:38.484055   99368 system_pods.go:86] 17 kube-system pods found
	I1010 18:15:38.484088   99368 system_pods.go:89] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:15:38.484094   99368 system_pods.go:89] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:15:38.484098   99368 system_pods.go:89] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:15:38.484102   99368 system_pods.go:89] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:15:38.484106   99368 system_pods.go:89] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:15:38.484109   99368 system_pods.go:89] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:15:38.484113   99368 system_pods.go:89] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:15:38.484116   99368 system_pods.go:89] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:15:38.484119   99368 system_pods.go:89] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:15:38.484122   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:15:38.484125   99368 system_pods.go:89] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:15:38.484128   99368 system_pods.go:89] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:15:38.484132   99368 system_pods.go:89] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:15:38.484135   99368 system_pods.go:89] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:15:38.484139   99368 system_pods.go:89] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:15:38.484141   99368 system_pods.go:89] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:15:38.484144   99368 system_pods.go:89] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:15:38.484152   99368 system_pods.go:126] duration metric: took 235.71716ms to wait for k8s-apps to be running ...
	I1010 18:15:38.484162   99368 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:15:38.484219   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:15:38.499587   99368 system_svc.go:56] duration metric: took 15.413149ms WaitForService to wait for kubelet
	I1010 18:15:38.499630   99368 kubeadm.go:582] duration metric: took 23.162321939s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:15:38.499655   99368 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:15:38.645127   99368 request.go:632] Waited for 145.342386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I1010 18:15:38.645247   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I1010 18:15:38.645259   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.645267   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.645272   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.649291   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:38.650032   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:15:38.650065   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:15:38.650077   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:15:38.650081   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:15:38.650086   99368 node_conditions.go:105] duration metric: took 150.425543ms to run NodePressure ...
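
The NodePressure step lists every node and reads the capacity fields echoed above (ephemeral storage and cpu). A small sketch of the same listing with client-go (a hypothetical helper, not the verifier minikube uses):

// nodecap.go - sketch: list nodes and print the capacity fields the log reports.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}
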
	I1010 18:15:38.650104   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:15:38.650137   99368 start.go:255] writing updated cluster config ...
	I1010 18:15:38.652551   99368 out.go:201] 
	I1010 18:15:38.654476   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:15:38.654593   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:15:38.656332   99368 out.go:177] * Starting "ha-142481-m03" control-plane node in "ha-142481" cluster
	I1010 18:15:38.657633   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:15:38.657659   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:15:38.657790   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:15:38.657806   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:15:38.657908   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:15:38.658076   99368 start.go:360] acquireMachinesLock for ha-142481-m03: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:15:38.658122   99368 start.go:364] duration metric: took 26.16µs to acquireMachinesLock for "ha-142481-m03"
	I1010 18:15:38.658147   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:15:38.658249   99368 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1010 18:15:38.660071   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:15:38.660197   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:15:38.660258   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:15:38.676361   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I1010 18:15:38.676935   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:15:38.677467   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:15:38.677506   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:15:38.677892   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:15:38.678105   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:15:38.678326   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:15:38.678504   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:15:38.678538   99368 client.go:168] LocalClient.Create starting
	I1010 18:15:38.678568   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:15:38.678601   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:15:38.678614   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:15:38.678663   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:15:38.678681   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:15:38.678691   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:15:38.678707   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:15:38.678715   99368 main.go:141] libmachine: (ha-142481-m03) Calling .PreCreateCheck
	I1010 18:15:38.678898   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:15:38.679630   99368 main.go:141] libmachine: Creating machine...
	I1010 18:15:38.679653   99368 main.go:141] libmachine: (ha-142481-m03) Calling .Create
	I1010 18:15:38.680877   99368 main.go:141] libmachine: (ha-142481-m03) Creating KVM machine...
	I1010 18:15:38.681726   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found existing default KVM network
	I1010 18:15:38.681754   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found existing private KVM network mk-ha-142481
	I1010 18:15:38.681811   99368 main.go:141] libmachine: (ha-142481-m03) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 ...
	I1010 18:15:38.681845   99368 main.go:141] libmachine: (ha-142481-m03) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:15:38.681908   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:38.681805  100144 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:15:38.681991   99368 main.go:141] libmachine: (ha-142481-m03) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:15:38.938889   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:38.938689  100144 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa...
	I1010 18:15:39.048405   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:39.048265  100144 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/ha-142481-m03.rawdisk...
	I1010 18:15:39.048440   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Writing magic tar header
	I1010 18:15:39.048457   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Writing SSH key tar header
	I1010 18:15:39.048467   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:39.048382  100144 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 ...
	I1010 18:15:39.048494   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03
	I1010 18:15:39.048510   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:15:39.048527   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 (perms=drwx------)
	I1010 18:15:39.048549   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:15:39.048564   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:15:39.048578   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:15:39.048592   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:15:39.048605   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:15:39.048635   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:15:39.048655   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:15:39.048662   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:15:39.048676   99368 main.go:141] libmachine: (ha-142481-m03) Creating domain...
	I1010 18:15:39.048685   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:15:39.048696   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home
	I1010 18:15:39.048710   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Skipping /home - not owner
	I1010 18:15:39.049753   99368 main.go:141] libmachine: (ha-142481-m03) define libvirt domain using xml: 
	I1010 18:15:39.049779   99368 main.go:141] libmachine: (ha-142481-m03) <domain type='kvm'>
	I1010 18:15:39.049790   99368 main.go:141] libmachine: (ha-142481-m03)   <name>ha-142481-m03</name>
	I1010 18:15:39.049799   99368 main.go:141] libmachine: (ha-142481-m03)   <memory unit='MiB'>2200</memory>
	I1010 18:15:39.049809   99368 main.go:141] libmachine: (ha-142481-m03)   <vcpu>2</vcpu>
	I1010 18:15:39.049816   99368 main.go:141] libmachine: (ha-142481-m03)   <features>
	I1010 18:15:39.049822   99368 main.go:141] libmachine: (ha-142481-m03)     <acpi/>
	I1010 18:15:39.049830   99368 main.go:141] libmachine: (ha-142481-m03)     <apic/>
	I1010 18:15:39.049835   99368 main.go:141] libmachine: (ha-142481-m03)     <pae/>
	I1010 18:15:39.049839   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.049845   99368 main.go:141] libmachine: (ha-142481-m03)   </features>
	I1010 18:15:39.049849   99368 main.go:141] libmachine: (ha-142481-m03)   <cpu mode='host-passthrough'>
	I1010 18:15:39.049856   99368 main.go:141] libmachine: (ha-142481-m03)   
	I1010 18:15:39.049862   99368 main.go:141] libmachine: (ha-142481-m03)   </cpu>
	I1010 18:15:39.049890   99368 main.go:141] libmachine: (ha-142481-m03)   <os>
	I1010 18:15:39.049903   99368 main.go:141] libmachine: (ha-142481-m03)     <type>hvm</type>
	I1010 18:15:39.049915   99368 main.go:141] libmachine: (ha-142481-m03)     <boot dev='cdrom'/>
	I1010 18:15:39.049926   99368 main.go:141] libmachine: (ha-142481-m03)     <boot dev='hd'/>
	I1010 18:15:39.049939   99368 main.go:141] libmachine: (ha-142481-m03)     <bootmenu enable='no'/>
	I1010 18:15:39.049945   99368 main.go:141] libmachine: (ha-142481-m03)   </os>
	I1010 18:15:39.049956   99368 main.go:141] libmachine: (ha-142481-m03)   <devices>
	I1010 18:15:39.049966   99368 main.go:141] libmachine: (ha-142481-m03)     <disk type='file' device='cdrom'>
	I1010 18:15:39.049980   99368 main.go:141] libmachine: (ha-142481-m03)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/boot2docker.iso'/>
	I1010 18:15:39.049991   99368 main.go:141] libmachine: (ha-142481-m03)       <target dev='hdc' bus='scsi'/>
	I1010 18:15:39.050016   99368 main.go:141] libmachine: (ha-142481-m03)       <readonly/>
	I1010 18:15:39.050029   99368 main.go:141] libmachine: (ha-142481-m03)     </disk>
	I1010 18:15:39.050036   99368 main.go:141] libmachine: (ha-142481-m03)     <disk type='file' device='disk'>
	I1010 18:15:39.050044   99368 main.go:141] libmachine: (ha-142481-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:15:39.050056   99368 main.go:141] libmachine: (ha-142481-m03)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/ha-142481-m03.rawdisk'/>
	I1010 18:15:39.050065   99368 main.go:141] libmachine: (ha-142481-m03)       <target dev='hda' bus='virtio'/>
	I1010 18:15:39.050070   99368 main.go:141] libmachine: (ha-142481-m03)     </disk>
	I1010 18:15:39.050075   99368 main.go:141] libmachine: (ha-142481-m03)     <interface type='network'>
	I1010 18:15:39.050081   99368 main.go:141] libmachine: (ha-142481-m03)       <source network='mk-ha-142481'/>
	I1010 18:15:39.050087   99368 main.go:141] libmachine: (ha-142481-m03)       <model type='virtio'/>
	I1010 18:15:39.050092   99368 main.go:141] libmachine: (ha-142481-m03)     </interface>
	I1010 18:15:39.050099   99368 main.go:141] libmachine: (ha-142481-m03)     <interface type='network'>
	I1010 18:15:39.050104   99368 main.go:141] libmachine: (ha-142481-m03)       <source network='default'/>
	I1010 18:15:39.050114   99368 main.go:141] libmachine: (ha-142481-m03)       <model type='virtio'/>
	I1010 18:15:39.050121   99368 main.go:141] libmachine: (ha-142481-m03)     </interface>
	I1010 18:15:39.050128   99368 main.go:141] libmachine: (ha-142481-m03)     <serial type='pty'>
	I1010 18:15:39.050232   99368 main.go:141] libmachine: (ha-142481-m03)       <target port='0'/>
	I1010 18:15:39.050268   99368 main.go:141] libmachine: (ha-142481-m03)     </serial>
	I1010 18:15:39.050282   99368 main.go:141] libmachine: (ha-142481-m03)     <console type='pty'>
	I1010 18:15:39.050294   99368 main.go:141] libmachine: (ha-142481-m03)       <target type='serial' port='0'/>
	I1010 18:15:39.050305   99368 main.go:141] libmachine: (ha-142481-m03)     </console>
	I1010 18:15:39.050315   99368 main.go:141] libmachine: (ha-142481-m03)     <rng model='virtio'>
	I1010 18:15:39.050328   99368 main.go:141] libmachine: (ha-142481-m03)       <backend model='random'>/dev/random</backend>
	I1010 18:15:39.050340   99368 main.go:141] libmachine: (ha-142481-m03)     </rng>
	I1010 18:15:39.050350   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.050359   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.050371   99368 main.go:141] libmachine: (ha-142481-m03)   </devices>
	I1010 18:15:39.050378   99368 main.go:141] libmachine: (ha-142481-m03) </domain>
	I1010 18:15:39.050391   99368 main.go:141] libmachine: (ha-142481-m03) 
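
The XML above is the libvirt domain definition the kvm2 driver generates for ha-142481-m03 before the VM is booted. Defining and starting a domain from such a document can be sketched with the libvirt Go bindings; the file name and connection URI below are assumptions for illustration, and this is not the driver's actual code path:

// definevm.go - sketch: define and start a libvirt domain from an XML document,
// roughly the "define libvirt domain using xml" / "Creating domain..." steps in the log.
// Requires the libvirt client library on the host (cgo bindings).
package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-142481-m03.xml") // hypothetical file holding the XML shown above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain
		panic(err)
	}
	fmt.Println("domain defined and started")
}

The define/Create pair corresponds to the "define libvirt domain using xml" and "Creating domain..." lines in the log.
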
	I1010 18:15:39.057742   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:01:68:df in network default
	I1010 18:15:39.058339   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring networks are active...
	I1010 18:15:39.058372   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:39.059040   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring network default is active
	I1010 18:15:39.059385   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring network mk-ha-142481 is active
	I1010 18:15:39.060065   99368 main.go:141] libmachine: (ha-142481-m03) Getting domain xml...
	I1010 18:15:39.061108   99368 main.go:141] libmachine: (ha-142481-m03) Creating domain...
	I1010 18:15:40.343936   99368 main.go:141] libmachine: (ha-142481-m03) Waiting to get IP...
	I1010 18:15:40.344892   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.345373   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.345401   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.345319  100144 retry.go:31] will retry after 289.570163ms: waiting for machine to come up
	I1010 18:15:40.637167   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.637765   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.637799   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.637685  100144 retry.go:31] will retry after 311.078832ms: waiting for machine to come up
	I1010 18:15:40.950108   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.950581   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.950610   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.950529  100144 retry.go:31] will retry after 356.951796ms: waiting for machine to come up
	I1010 18:15:41.309147   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:41.309650   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:41.309677   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:41.309602  100144 retry.go:31] will retry after 532.45566ms: waiting for machine to come up
	I1010 18:15:41.843545   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:41.844119   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:41.844147   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:41.844054  100144 retry.go:31] will retry after 601.557958ms: waiting for machine to come up
	I1010 18:15:42.447022   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:42.447619   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:42.447649   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:42.447560  100144 retry.go:31] will retry after 756.716179ms: waiting for machine to come up
	I1010 18:15:43.206472   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:43.207013   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:43.207043   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:43.206973  100144 retry.go:31] will retry after 1.170057285s: waiting for machine to come up
	I1010 18:15:44.378682   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:44.379169   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:44.379199   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:44.379123  100144 retry.go:31] will retry after 1.176461257s: waiting for machine to come up
	I1010 18:15:45.558684   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:45.559193   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:45.559220   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:45.559154  100144 retry.go:31] will retry after 1.48319029s: waiting for machine to come up
	I1010 18:15:47.044036   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:47.044496   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:47.044521   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:47.044430  100144 retry.go:31] will retry after 1.688231692s: waiting for machine to come up
	I1010 18:15:48.734646   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:48.735151   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:48.735174   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:48.735104  100144 retry.go:31] will retry after 2.212019945s: waiting for machine to come up
	I1010 18:15:50.948675   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:50.949207   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:50.949236   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:50.949160  100144 retry.go:31] will retry after 2.319000915s: waiting for machine to come up
	I1010 18:15:53.270642   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:53.271193   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:53.271216   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:53.271155  100144 retry.go:31] will retry after 3.719042495s: waiting for machine to come up
	I1010 18:15:56.994579   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:56.995029   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:56.995054   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:56.994970  100144 retry.go:31] will retry after 5.298417625s: waiting for machine to come up
	I1010 18:16:02.294993   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.295462   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has current primary IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.295487   99368 main.go:141] libmachine: (ha-142481-m03) Found IP for machine: 192.168.39.175
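
The repeated "will retry after ..." lines come from a retry helper that re-checks the network's DHCP leases with a growing, jittered delay until the new domain has an address. A generic sketch of that retry shape follows (not minikube's retry.go; lookupIP is a hypothetical stand-in for the lease lookup):

// retryip.go - sketch: retry an operation with growing, jittered backoff,
// the shape of the "waiting for machine to come up" loop in the log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP is a placeholder for "find the domain's IP in the network's DHCP leases".
func lookupIP() (string, error) {
	return "", errNoIP // pretend the lease has not appeared yet
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2))) // add jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay, as the intervals in the log do
	}
	fmt.Println("timed out waiting for an IP address")
}
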
	I1010 18:16:02.295500   99368 main.go:141] libmachine: (ha-142481-m03) Reserving static IP address...
	I1010 18:16:02.295917   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find host DHCP lease matching {name: "ha-142481-m03", mac: "52:54:00:06:ed:5a", ip: "192.168.39.175"} in network mk-ha-142481
	I1010 18:16:02.376364   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Getting to WaitForSSH function...
	I1010 18:16:02.376400   99368 main.go:141] libmachine: (ha-142481-m03) Reserved static IP address: 192.168.39.175
	I1010 18:16:02.376420   99368 main.go:141] libmachine: (ha-142481-m03) Waiting for SSH to be available...
	I1010 18:16:02.379038   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.379428   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481
	I1010 18:16:02.379482   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find defined IP address of network mk-ha-142481 interface with MAC address 52:54:00:06:ed:5a
	I1010 18:16:02.379643   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH client type: external
	I1010 18:16:02.379666   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa (-rw-------)
	I1010 18:16:02.379695   99368 main.go:141] libmachine: (ha-142481-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:16:02.379708   99368 main.go:141] libmachine: (ha-142481-m03) DBG | About to run SSH command:
	I1010 18:16:02.379720   99368 main.go:141] libmachine: (ha-142481-m03) DBG | exit 0
	I1010 18:16:02.383609   99368 main.go:141] libmachine: (ha-142481-m03) DBG | SSH cmd err, output: exit status 255: 
	I1010 18:16:02.383645   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1010 18:16:02.383673   99368 main.go:141] libmachine: (ha-142481-m03) DBG | command : exit 0
	I1010 18:16:02.383687   99368 main.go:141] libmachine: (ha-142481-m03) DBG | err     : exit status 255
	I1010 18:16:02.383701   99368 main.go:141] libmachine: (ha-142481-m03) DBG | output  : 
	I1010 18:16:05.385045   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Getting to WaitForSSH function...
	I1010 18:16:05.387500   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.388024   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.388058   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.388149   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH client type: external
	I1010 18:16:05.388172   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa (-rw-------)
	I1010 18:16:05.388198   99368 main.go:141] libmachine: (ha-142481-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:16:05.388212   99368 main.go:141] libmachine: (ha-142481-m03) DBG | About to run SSH command:
	I1010 18:16:05.388222   99368 main.go:141] libmachine: (ha-142481-m03) DBG | exit 0
	I1010 18:16:05.517373   99368 main.go:141] libmachine: (ha-142481-m03) DBG | SSH cmd err, output: <nil>: 
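
WaitForSSH keeps running `exit 0` over SSH until it succeeds; the first attempt above fails with exit status 255 because sshd in the guest is not up yet. The driver shells out to /usr/bin/ssh, but the same probe can be sketched with golang.org/x/crypto/ssh; the host, user, and key path below mirror the log and are assumptions here:

// waitssh.go - sketch: retry running "exit 0" over SSH until the guest accepts connections.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// trySSH dials the host, opens a session, and runs "exit 0" once.
func trySSH(addr string, cfg *ssh.ClientConfig) error {
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		Timeout:         10 * time.Second,
	}
	for {
		if err := trySSH("192.168.39.175:22", cfg); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // pause between attempts, as the driver does
	}
}
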
	I1010 18:16:05.517675   99368 main.go:141] libmachine: (ha-142481-m03) KVM machine creation complete!
	I1010 18:16:05.517976   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:16:05.518524   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:05.518756   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:05.518928   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:16:05.518944   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetState
	I1010 18:16:05.520359   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:16:05.520374   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:16:05.520382   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:16:05.520388   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.523092   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.523568   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.523601   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.523714   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.523901   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.524055   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.524156   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.524338   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.524636   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.524669   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:16:05.632367   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:16:05.632396   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:16:05.632408   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.635809   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.636216   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.636238   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.636547   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.636757   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.636963   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.637090   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.637319   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.637523   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.637539   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:16:05.749769   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:16:05.749833   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:16:05.749840   99368 main.go:141] libmachine: Provisioning with buildroot...
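
The provisioner is picked from the guest's /etc/os-release read above; ID=buildroot selects the buildroot provisioning path. Equivalent manual check (a sketch reusing the same connection details):

  ssh -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa \
      docker@192.168.39.175 'grep ^ID= /etc/os-release'   # prints ID=buildroot
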
	I1010 18:16:05.749847   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:05.750100   99368 buildroot.go:166] provisioning hostname "ha-142481-m03"
	I1010 18:16:05.750135   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:05.750348   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.753204   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.753697   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.753724   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.753970   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.754155   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.754326   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.754456   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.754597   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.754815   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.754835   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481-m03 && echo "ha-142481-m03" | sudo tee /etc/hostname
	I1010 18:16:05.886094   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481-m03
	
	I1010 18:16:05.886129   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.889027   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.889401   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.889420   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.889629   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.889843   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.889995   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.890115   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.890271   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.890474   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.890491   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:16:06.011027   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
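
Hostname provisioning is the two commands above: set the transient and persistent hostname, then make sure /etc/hosts resolves it via the 127.0.1.1 convention. A quick verification on the node (assumed manual check):

  hostname                                          # ha-142481-m03
  grep -n 'ha-142481-m03' /etc/hostname /etc/hosts  # both files should match
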
	I1010 18:16:06.011075   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:16:06.011118   99368 buildroot.go:174] setting up certificates
	I1010 18:16:06.011128   99368 provision.go:84] configureAuth start
	I1010 18:16:06.011159   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:06.011515   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.014592   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.015019   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.015050   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.015255   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.017745   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.018212   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.018241   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.018399   99368 provision.go:143] copyHostCerts
	I1010 18:16:06.018428   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:16:06.018461   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:16:06.018471   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:16:06.018534   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:16:06.018611   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:16:06.018628   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:16:06.018635   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:16:06.018659   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:16:06.018703   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:16:06.018722   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:16:06.018728   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:16:06.018748   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:16:06.018800   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481-m03 san=[127.0.0.1 192.168.39.175 ha-142481-m03 localhost minikube]
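
minikube generates that server certificate in Go; purely for illustration, a certificate with the same SAN list can be produced with openssl (a sketch -- mapping the org to the O field and the validity period are assumptions, the SANs are the ones printed above):

  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
    -subj "/O=jenkins.ha-142481-m03"
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -days 365 -out server.pem \
    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.175,DNS:ha-142481-m03,DNS:localhost,DNS:minikube')
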
	I1010 18:16:06.222717   99368 provision.go:177] copyRemoteCerts
	I1010 18:16:06.222779   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:16:06.222805   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.225434   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.225825   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.225848   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.226065   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.226286   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.226456   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.226630   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.315791   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:16:06.315882   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:16:06.343259   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:16:06.343345   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 18:16:06.370749   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:16:06.370822   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:16:06.397148   99368 provision.go:87] duration metric: took 386.005417ms to configureAuth
	I1010 18:16:06.397183   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:16:06.397452   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:06.397548   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.400947   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.401493   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.401529   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.401697   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.401877   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.402099   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.402329   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.402536   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:06.402752   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:06.402772   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:16:06.637717   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:16:06.637751   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:16:06.637762   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetURL
	I1010 18:16:06.639112   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using libvirt version 6000000
	I1010 18:16:06.641181   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.641548   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.641587   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.641730   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:16:06.641747   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:16:06.641756   99368 client.go:171] duration metric: took 27.963208724s to LocalClient.Create
	I1010 18:16:06.641785   99368 start.go:167] duration metric: took 27.963279742s to libmachine.API.Create "ha-142481"
	I1010 18:16:06.641795   99368 start.go:293] postStartSetup for "ha-142481-m03" (driver="kvm2")
	I1010 18:16:06.641804   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:16:06.641824   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.642091   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:16:06.642123   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.644087   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.644396   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.644432   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.644567   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.644765   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.644924   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.645078   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.732228   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:16:06.736988   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:16:06.737036   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:16:06.737116   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:16:06.737228   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:16:06.737241   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:16:06.737350   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:16:06.747599   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:16:06.779643   99368 start.go:296] duration metric: took 137.832802ms for postStartSetup
	I1010 18:16:06.779701   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:16:06.780474   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.783287   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.783711   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.783739   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.784133   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:16:06.784363   99368 start.go:128] duration metric: took 28.126102871s to createHost
	I1010 18:16:06.784390   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.786724   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.787090   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.787113   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.787327   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.787526   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.787700   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.787826   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.787997   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:06.788211   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:06.788226   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:16:06.901742   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584166.882037024
	
	I1010 18:16:06.901769   99368 fix.go:216] guest clock: 1728584166.882037024
	I1010 18:16:06.901778   99368 fix.go:229] Guest: 2024-10-10 18:16:06.882037024 +0000 UTC Remote: 2024-10-10 18:16:06.784377622 +0000 UTC m=+148.714965698 (delta=97.659402ms)
	I1010 18:16:06.901799   99368 fix.go:200] guest clock delta is within tolerance: 97.659402ms
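
The clock-skew check compares the guest's wall clock (the date +%s.%N run above) with the host's and accepts the node when the delta stays within tolerance. Manual spot check (a sketch with the same connection details):

  ssh -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa \
      docker@192.168.39.175 'date +%s.%N'; date +%s.%N   # the two timestamps should be close
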
	I1010 18:16:06.901806   99368 start.go:83] releasing machines lock for "ha-142481-m03", held for 28.24367452s
	I1010 18:16:06.901831   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.902170   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.904709   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.905164   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.905194   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.907619   99368 out.go:177] * Found network options:
	I1010 18:16:06.909057   99368 out.go:177]   - NO_PROXY=192.168.39.104,192.168.39.186
	W1010 18:16:06.910397   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	W1010 18:16:06.910422   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:16:06.910439   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911020   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911247   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911351   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:16:06.911394   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	W1010 18:16:06.911428   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	W1010 18:16:06.911458   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:16:06.911514   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:16:06.911529   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.914295   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914543   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914629   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.914656   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914760   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.914838   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.914856   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914913   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.915049   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.915098   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.915168   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.915225   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.915381   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.915497   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:07.163627   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:16:07.170344   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:16:07.170418   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:16:07.188658   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:16:07.188691   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:16:07.188764   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:16:07.207458   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:16:07.223388   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:16:07.223465   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:16:07.240312   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:16:07.258338   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:16:07.397297   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:16:07.555534   99368 docker.go:233] disabling docker service ...
	I1010 18:16:07.555621   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:16:07.571003   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:16:07.585612   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:16:07.724995   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:16:07.861369   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:16:07.876144   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:16:07.895651   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:16:07.895716   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.906721   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:16:07.906792   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.917729   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.929016   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.940559   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:16:07.953995   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.965226   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.984344   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
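
The sed edits above rewrite CRI-O's drop-in config in place rather than regenerating it. A verification sketch for the resulting values (key names are taken from the commands above; the exact file layout is whatever the ISO ships):

  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
       /etc/crio/crio.conf.d/02-crio.conf
  # pause_image = "registry.k8s.io/pause:3.10"
  # cgroup_manager = "cgroupfs"
  # conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",
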
	I1010 18:16:07.995983   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:16:08.006420   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:16:08.006504   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:16:08.021735   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
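
The sysctl failure above is expected on a fresh guest: the net.bridge.* keys only exist once the br_netfilter module is loaded, which is exactly what the next two commands take care of. Manual equivalent of the sequence (a sketch):

  sudo modprobe br_netfilter                        # makes /proc/sys/net/bridge/* appear
  sudo sysctl net.bridge.bridge-nf-call-iptables    # now resolvable
  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # IPv4 forwarding for pod traffic
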
	I1010 18:16:08.033011   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:08.164791   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:16:08.260672   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:16:08.260742   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:16:08.271900   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:16:08.271960   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:16:08.275929   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:16:08.314672   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:16:08.314749   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:16:08.346340   99368 ssh_runner.go:195] Run: crio --version
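
Those version checks talk to CRI-O through its CRI socket; the same probes can be run by hand on the node (assumed manual check):

  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
  crio --version
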
	I1010 18:16:08.377606   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:16:08.379014   99368 out.go:177]   - env NO_PROXY=192.168.39.104
	I1010 18:16:08.380435   99368 out.go:177]   - env NO_PROXY=192.168.39.104,192.168.39.186
	I1010 18:16:08.381694   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:08.384544   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:08.384908   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:08.384939   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:08.385183   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:16:08.389725   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:16:08.402638   99368 mustload.go:65] Loading cluster: ha-142481
	I1010 18:16:08.402881   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:08.403135   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:08.403183   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:08.418274   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I1010 18:16:08.418827   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:08.419392   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:08.419418   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:08.419747   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:08.419899   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:16:08.421605   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:16:08.421927   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:08.421980   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:08.437329   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I1010 18:16:08.437789   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:08.438250   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:08.438271   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:08.438615   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:08.438801   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:16:08.438970   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.175
	I1010 18:16:08.438988   99368 certs.go:194] generating shared ca certs ...
	I1010 18:16:08.439008   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.439150   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:16:08.439211   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:16:08.439224   99368 certs.go:256] generating profile certs ...
	I1010 18:16:08.439325   99368 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:16:08.439355   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d
	I1010 18:16:08.439376   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.186 192.168.39.175 192.168.39.254]
	I1010 18:16:08.528731   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d ...
	I1010 18:16:08.528764   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d: {Name:mk202db6f01b46b51940ca7afe581ede7b3af4e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.528980   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d ...
	I1010 18:16:08.528997   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d: {Name:mk61783eedf299ba3a6dbb3f62b131938823078c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.529112   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:16:08.529294   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
	I1010 18:16:08.529465   99368 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:16:08.529488   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:16:08.529506   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:16:08.529521   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:16:08.529540   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:16:08.529557   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:16:08.529580   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:16:08.529599   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:16:08.545002   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:16:08.545123   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:16:08.545166   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:16:08.545178   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:16:08.545225   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:16:08.545259   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:16:08.545291   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:16:08.545339   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:16:08.545380   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:16:08.545401   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:08.545415   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:16:08.545465   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:16:08.548797   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:08.549296   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:16:08.549316   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:08.549545   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:16:08.549789   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:16:08.549993   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:16:08.550143   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:16:08.629272   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1010 18:16:08.635349   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1010 18:16:08.648258   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1010 18:16:08.653797   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1010 18:16:08.665553   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1010 18:16:08.670066   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1010 18:16:08.681281   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1010 18:16:08.685851   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1010 18:16:08.696759   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1010 18:16:08.701070   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1010 18:16:08.719143   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1010 18:16:08.723782   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1010 18:16:08.735082   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:16:08.763420   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:16:08.789246   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:16:08.814697   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:16:08.840641   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1010 18:16:08.865783   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:16:08.890663   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:16:08.916077   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:16:08.941574   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:16:08.971689   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:16:08.996394   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:16:09.021329   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1010 18:16:09.039289   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1010 18:16:09.058514   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1010 18:16:09.075508   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1010 18:16:09.094047   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1010 18:16:09.112093   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1010 18:16:09.130182   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1010 18:16:09.147655   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:16:09.153962   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:16:09.165361   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.170099   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.170163   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.175991   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:16:09.187134   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:16:09.199298   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.204550   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.204607   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.210501   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:16:09.222047   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:16:09.233165   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.238141   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.238209   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.243899   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
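
Each of the three certificates above is installed twice: once under its own name and once under the OpenSSL subject-hash name that TLS libraries look up in /etc/ssl/certs. The hash in the symlink comes straight from the certificate, e.g. for the first one (a sketch of the same pattern):

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem)   # 3ec20f2e here
  sudo ln -fs /etc/ssl/certs/888762.pem "/etc/ssl/certs/${h}.0"
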
	I1010 18:16:09.256154   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:16:09.260558   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:16:09.260620   99368 kubeadm.go:934] updating node {m03 192.168.39.175 8443 v1.31.1 crio true true} ...
	I1010 18:16:09.260712   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
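
The kubelet unit above pins the node-specific flags for m03 (--hostname-override=ha-142481-m03 and --node-ip=192.168.39.175); the unit and its drop-in are written to the node a few lines below as kubelet.service and 10-kubeadm.conf. To inspect the merged result on the node afterwards (assumed manual check):

  systemctl cat kubelet   # kubelet.service plus the 10-kubeadm.conf drop-in
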
	I1010 18:16:09.260747   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:16:09.260788   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:16:09.281432   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:16:09.281532   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
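
That manifest is installed as a static pod (kube-vip.yaml under /etc/kubernetes/manifests, see the transfer below), so kubelet starts it without needing the API server; kube-vip then runs leader election across the control-plane nodes and announces the VIP 192.168.39.254 on eth0 via ARP. Checks one could run on the node once kubelet is up (a sketch):

  sudo crictl pods --name kube-vip              # the static pod created by kubelet
  ip addr show dev eth0 | grep 192.168.39.254   # the VIP is present only on the current leader
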
	I1010 18:16:09.281598   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:16:09.292238   99368 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1010 18:16:09.292302   99368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1010 18:16:09.302815   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1010 18:16:09.302834   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1010 18:16:09.302847   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:16:09.302858   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:16:09.302874   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1010 18:16:09.302911   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:16:09.302925   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:16:09.302927   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:16:09.313038   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1010 18:16:09.313076   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1010 18:16:09.313295   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1010 18:16:09.313324   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1010 18:16:09.329019   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:16:09.329132   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:16:09.460792   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1010 18:16:09.460863   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
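
The "Not caching binary" lines above fetch each component from dl.k8s.io, verify it against the published .sha256 file, and then copy it onto the node. Manual equivalent for one binary, following the standard upstream procedure (a sketch):

  curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
  curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check   # kubelet: OK
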
	I1010 18:16:10.167695   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1010 18:16:10.178304   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1010 18:16:10.196198   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:16:10.214107   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1010 18:16:10.231699   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:16:10.235598   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:16:10.249379   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:10.372228   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:16:10.389956   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:16:10.390482   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:10.390543   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:10.406538   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
	I1010 18:16:10.407120   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:10.407715   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:10.407745   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:10.408171   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:10.408424   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:16:10.408616   99368 start.go:317] joinCluster: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:16:10.408761   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1010 18:16:10.408786   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:16:10.412501   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:10.412938   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:16:10.412967   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:10.413287   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:16:10.413489   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:16:10.413662   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:16:10.413878   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:16:10.584962   99368 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:10.585036   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01a2dn.g9vqo5mbslppupip --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m03 --control-plane --apiserver-advertise-address=192.168.39.175 --apiserver-bind-port=8443"
	I1010 18:16:34.116751   99368 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01a2dn.g9vqo5mbslppupip --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m03 --control-plane --apiserver-advertise-address=192.168.39.175 --apiserver-bind-port=8443": (23.531656117s)
	I1010 18:16:34.116799   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1010 18:16:34.662406   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481-m03 minikube.k8s.io/updated_at=2024_10_10T18_16_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=false
	I1010 18:16:34.812925   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-142481-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1010 18:16:34.939968   99368 start.go:319] duration metric: took 24.531346267s to joinCluster
	I1010 18:16:34.940121   99368 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:34.940600   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:34.942338   99368 out.go:177] * Verifying Kubernetes components...
	I1010 18:16:34.943872   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:35.261137   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:16:35.322955   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:16:35.323214   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1010 18:16:35.323281   99368 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.104:8443
	I1010 18:16:35.323557   99368 node_ready.go:35] waiting up to 6m0s for node "ha-142481-m03" to be "Ready" ...
	I1010 18:16:35.323656   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:35.323668   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:35.323679   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:35.323685   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:35.327318   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:35.823831   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:35.823858   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:35.823871   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:35.823877   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:35.828659   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:36.324358   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:36.324382   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:36.324391   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:36.324395   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:36.327758   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:36.823911   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:36.823934   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:36.823942   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:36.823946   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:36.827063   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:37.323987   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:37.324011   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:37.324019   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:37.324023   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:37.327375   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:37.328058   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:37.824329   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:37.824354   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:37.824443   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:37.824455   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:37.828067   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:38.323986   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:38.324025   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:38.324040   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:38.324046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:38.327494   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:38.823762   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:38.823785   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:38.823794   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:38.823798   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:38.827926   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:39.323928   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:39.323957   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:39.323969   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:39.323975   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:39.330422   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:39.331171   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:39.824574   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:39.824598   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:39.824607   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:39.824610   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:39.828722   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:40.324796   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:40.324827   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:40.324838   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:40.324845   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:40.328842   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:40.823953   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:40.823979   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:40.823990   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:40.823996   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:40.828272   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:41.324192   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:41.324218   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:41.324227   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:41.324230   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:41.327987   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:41.824162   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:41.824186   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:41.824198   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:41.824204   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:41.827541   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:41.828232   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:42.324743   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:42.324783   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:42.324794   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:42.324801   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:42.328551   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:42.824718   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:42.824744   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:42.824755   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:42.824760   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:42.828428   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.324320   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:43.324346   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:43.324355   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:43.324364   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:43.328322   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.823956   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:43.824002   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:43.824013   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:43.824019   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:43.827615   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.828260   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:44.324587   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:44.324612   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:44.324620   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:44.324623   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:44.328569   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:44.823816   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:44.823840   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:44.823849   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:44.823853   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:44.827589   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.324648   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:45.324673   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:45.324681   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:45.324684   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:45.328227   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.824305   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:45.824330   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:45.824338   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:45.824342   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:45.827901   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.828489   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:46.323779   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:46.323813   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:46.323825   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:46.323830   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:46.327223   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:46.823931   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:46.823955   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:46.823964   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:46.823968   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:46.828168   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:47.324172   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:47.324200   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:47.324214   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:47.324232   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:47.327405   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:47.824446   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:47.824470   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:47.824478   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:47.824483   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:47.828085   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:47.828574   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:48.324641   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:48.324666   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:48.324674   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:48.324678   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:48.328399   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:48.823841   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:48.823872   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:48.823883   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:48.823899   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:48.827862   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:49.324364   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:49.324391   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:49.324402   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:49.324410   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:49.329836   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:16:49.824868   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:49.824898   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:49.824909   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:49.824916   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:49.832424   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:16:49.833781   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:50.324106   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:50.324129   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:50.324137   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:50.324141   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:50.327377   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:50.824781   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:50.824809   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:50.824818   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:50.824824   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:50.828461   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:51.324626   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:51.324651   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:51.324659   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:51.324663   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:51.327965   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:51.824004   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:51.824028   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:51.824036   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:51.824041   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:51.827827   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.323895   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.323930   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.323939   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.323943   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.327292   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.327943   99368 node_ready.go:49] node "ha-142481-m03" has status "Ready":"True"
	I1010 18:16:52.327963   99368 node_ready.go:38] duration metric: took 17.004388796s for node "ha-142481-m03" to be "Ready" ...
	I1010 18:16:52.327973   99368 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:16:52.328041   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:52.328051   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.328058   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.328063   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.335352   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:16:52.341969   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.342092   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-28dll
	I1010 18:16:52.342105   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.342116   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.342121   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.346524   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.347823   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.347844   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.347853   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.347860   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.352427   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.353100   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.353132   99368 pod_ready.go:82] duration metric: took 11.131703ms for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.353146   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.353233   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xfhq8
	I1010 18:16:52.353246   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.353255   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.353262   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.358189   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.359137   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.359158   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.359170   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.359194   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.361882   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.362586   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.362606   99368 pod_ready.go:82] duration metric: took 9.449469ms for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.362618   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.362680   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481
	I1010 18:16:52.362689   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.362696   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.362701   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.365259   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.365819   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.365835   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.365842   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.365857   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.368864   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.369337   99368 pod_ready.go:93] pod "etcd-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.369355   99368 pod_ready.go:82] duration metric: took 6.728138ms for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.369365   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.369427   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m02
	I1010 18:16:52.369435   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.369442   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.369447   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.371801   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.372469   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:52.372485   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.372496   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.372501   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.374845   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.375380   99368 pod_ready.go:93] pod "etcd-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.375400   99368 pod_ready.go:82] duration metric: took 6.028654ms for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.375414   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.524876   99368 request.go:632] Waited for 149.316037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m03
	I1010 18:16:52.524969   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m03
	I1010 18:16:52.524980   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.524993   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.525002   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.528336   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.724349   99368 request.go:632] Waited for 195.357304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.724414   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.724419   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.724429   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.724433   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.727821   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.728420   99368 pod_ready.go:93] pod "etcd-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.728440   99368 pod_ready.go:82] duration metric: took 353.013897ms for pod "etcd-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.728461   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.924606   99368 request.go:632] Waited for 196.006652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:16:52.924680   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:16:52.924687   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.924697   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.924702   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.928387   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.124197   99368 request.go:632] Waited for 194.992104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:53.124259   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:53.124264   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.124276   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.124281   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.127550   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.128097   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.128116   99368 pod_ready.go:82] duration metric: took 399.647709ms for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.128127   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.324538   99368 request.go:632] Waited for 196.340534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:16:53.324600   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:16:53.324606   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.324613   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.324617   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.328266   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.524803   99368 request.go:632] Waited for 195.841443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:53.524898   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:53.524906   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.524920   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.524931   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.529027   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:53.529616   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.529639   99368 pod_ready.go:82] duration metric: took 401.504985ms for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.529650   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.724123   99368 request.go:632] Waited for 194.402378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m03
	I1010 18:16:53.724207   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m03
	I1010 18:16:53.724212   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.724220   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.724226   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.728029   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.924000   99368 request.go:632] Waited for 195.20231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:53.924121   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:53.924136   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.924145   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.924149   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.927318   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.927936   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.927963   99368 pod_ready.go:82] duration metric: took 398.303309ms for pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.927977   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.124931   99368 request.go:632] Waited for 196.86396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:16:54.125030   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:16:54.125037   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.125045   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.125050   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.129323   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:54.324484   99368 request.go:632] Waited for 194.400861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:54.324554   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:54.324564   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.324574   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.324580   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.327854   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.328431   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:54.328451   99368 pod_ready.go:82] duration metric: took 400.466203ms for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.328463   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.524928   99368 request.go:632] Waited for 196.394012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:16:54.524994   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:16:54.525000   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.525008   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.525013   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.528390   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.724248   99368 request.go:632] Waited for 195.108613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:54.724318   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:54.724325   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.724335   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.724341   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.727499   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.727990   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:54.728011   99368 pod_ready.go:82] duration metric: took 399.541027ms for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.728023   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.924017   99368 request.go:632] Waited for 195.924922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m03
	I1010 18:16:54.924118   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m03
	I1010 18:16:54.924129   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.924137   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.924142   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.928875   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:55.123960   99368 request.go:632] Waited for 194.31178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.124017   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.124022   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.124030   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.124033   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.127461   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.128120   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.128144   99368 pod_ready.go:82] duration metric: took 400.113475ms for pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.128160   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cdjzg" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.323986   99368 request.go:632] Waited for 195.748073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cdjzg
	I1010 18:16:55.324049   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cdjzg
	I1010 18:16:55.324055   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.324063   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.324069   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.327396   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.524493   99368 request.go:632] Waited for 196.370396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.524560   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.524567   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.524578   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.524586   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.534026   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:16:55.534701   99368 pod_ready.go:93] pod "kube-proxy-cdjzg" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.534728   99368 pod_ready.go:82] duration metric: took 406.559679ms for pod "kube-proxy-cdjzg" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.534745   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.724765   99368 request.go:632] Waited for 189.945021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:16:55.724857   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:16:55.724864   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.724872   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.724878   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.727940   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.923972   99368 request.go:632] Waited for 195.304711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:55.924037   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:55.924052   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.924078   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.924085   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.927605   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.928243   99368 pod_ready.go:93] pod "kube-proxy-gwvrh" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.928264   99368 pod_ready.go:82] duration metric: took 393.511622ms for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.928278   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.124193   99368 request.go:632] Waited for 195.82573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:16:56.124313   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:16:56.124327   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.124336   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.124340   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.127896   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.324881   99368 request.go:632] Waited for 196.244687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:56.324996   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:56.325012   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.325022   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.325029   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.328576   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.329284   99368 pod_ready.go:93] pod "kube-proxy-srfng" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:56.329304   99368 pod_ready.go:82] duration metric: took 401.01865ms for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.329315   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.524473   99368 request.go:632] Waited for 195.075639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:16:56.524535   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:16:56.524541   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.524548   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.524554   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.527661   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.724798   99368 request.go:632] Waited for 196.388114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:56.724919   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:56.724934   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.724945   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.724955   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.728172   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.728664   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:56.728684   99368 pod_ready.go:82] duration metric: took 399.362342ms for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.728700   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.924703   99368 request.go:632] Waited for 195.908558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:16:56.924769   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:16:56.924784   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.924793   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.924796   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.928241   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.124466   99368 request.go:632] Waited for 195.354302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:57.124566   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:57.124592   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.124604   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.124613   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.128217   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.128748   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:57.128773   99368 pod_ready.go:82] duration metric: took 400.06441ms for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.128788   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.323894   99368 request.go:632] Waited for 195.025916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m03
	I1010 18:16:57.323960   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m03
	I1010 18:16:57.324019   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.324032   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.324036   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.328239   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:57.524431   99368 request.go:632] Waited for 195.425292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:57.524497   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:57.524503   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.524511   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.524515   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.527825   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.528689   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:57.528706   99368 pod_ready.go:82] duration metric: took 399.911051ms for pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.528718   99368 pod_ready.go:39] duration metric: took 5.200736466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:16:57.528734   99368 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:16:57.528787   99368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:16:57.545663   99368 api_server.go:72] duration metric: took 22.605494204s to wait for apiserver process to appear ...
	I1010 18:16:57.545694   99368 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:16:57.545718   99368 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I1010 18:16:57.552066   99368 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I1010 18:16:57.552813   99368 round_trippers.go:463] GET https://192.168.39.104:8443/version
	I1010 18:16:57.552870   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.552882   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.552890   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.555288   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:57.555381   99368 api_server.go:141] control plane version: v1.31.1
	I1010 18:16:57.555401   99368 api_server.go:131] duration metric: took 9.699914ms to wait for apiserver health ...
	I1010 18:16:57.555411   99368 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:16:57.724005   99368 request.go:632] Waited for 168.467999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:57.724082   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:57.724091   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.724106   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.724114   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.730879   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:57.737404   99368 system_pods.go:59] 24 kube-system pods found
	I1010 18:16:57.737436   99368 system_pods.go:61] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:16:57.737442   99368 system_pods.go:61] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:16:57.737445   99368 system_pods.go:61] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:16:57.737449   99368 system_pods.go:61] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:16:57.737452   99368 system_pods.go:61] "etcd-ha-142481-m03" [3f1ae212-d09b-446c-9172-52b9bfc6c20c] Running
	I1010 18:16:57.737456   99368 system_pods.go:61] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:16:57.737459   99368 system_pods.go:61] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:16:57.737463   99368 system_pods.go:61] "kindnet-cjcsf" [237e5649-ed64-401c-befd-99ef520d0761] Running
	I1010 18:16:57.737466   99368 system_pods.go:61] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:16:57.737469   99368 system_pods.go:61] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:16:57.737472   99368 system_pods.go:61] "kube-apiserver-ha-142481-m03" [4c7836a0-6697-4ce5-87d6-582097925f80] Running
	I1010 18:16:57.737476   99368 system_pods.go:61] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:16:57.737480   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:16:57.737484   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m03" [9444eb06-6dc4-44ab-a7d6-2d1d5b3e6410] Running
	I1010 18:16:57.737487   99368 system_pods.go:61] "kube-proxy-cdjzg" [98288460-9764-4e92-a589-e7e34654cfc5] Running
	I1010 18:16:57.737491   99368 system_pods.go:61] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:16:57.737494   99368 system_pods.go:61] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:16:57.737499   99368 system_pods.go:61] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:16:57.737505   99368 system_pods.go:61] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:16:57.737509   99368 system_pods.go:61] "kube-scheduler-ha-142481-m03" [a3eea545-bc31-4990-ad58-a43666964468] Running
	I1010 18:16:57.737512   99368 system_pods.go:61] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:16:57.737515   99368 system_pods.go:61] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:16:57.737519   99368 system_pods.go:61] "kube-vip-ha-142481-m03" [a93b4d63-0f6c-47b5-b987-a082b2b0d51a] Running
	I1010 18:16:57.737522   99368 system_pods.go:61] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:16:57.737528   99368 system_pods.go:74] duration metric: took 182.108204ms to wait for pod list to return data ...
	I1010 18:16:57.737537   99368 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:16:57.923961   99368 request.go:632] Waited for 186.32043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:16:57.924040   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:16:57.924048   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.924059   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.924064   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.928023   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.928206   99368 default_sa.go:45] found service account: "default"
	I1010 18:16:57.928229   99368 default_sa.go:55] duration metric: took 190.684117ms for default service account to be created ...
	I1010 18:16:57.928243   99368 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:16:58.124915   99368 request.go:632] Waited for 196.547566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:58.124982   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:58.124989   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:58.124999   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:58.125007   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:58.131096   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:58.138059   99368 system_pods.go:86] 24 kube-system pods found
	I1010 18:16:58.138089   99368 system_pods.go:89] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:16:58.138095   99368 system_pods.go:89] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:16:58.138099   99368 system_pods.go:89] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:16:58.138103   99368 system_pods.go:89] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:16:58.138107   99368 system_pods.go:89] "etcd-ha-142481-m03" [3f1ae212-d09b-446c-9172-52b9bfc6c20c] Running
	I1010 18:16:58.138111   99368 system_pods.go:89] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:16:58.138114   99368 system_pods.go:89] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:16:58.138117   99368 system_pods.go:89] "kindnet-cjcsf" [237e5649-ed64-401c-befd-99ef520d0761] Running
	I1010 18:16:58.138120   99368 system_pods.go:89] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:16:58.138124   99368 system_pods.go:89] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:16:58.138127   99368 system_pods.go:89] "kube-apiserver-ha-142481-m03" [4c7836a0-6697-4ce5-87d6-582097925f80] Running
	I1010 18:16:58.138131   99368 system_pods.go:89] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:16:58.138134   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:16:58.138138   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m03" [9444eb06-6dc4-44ab-a7d6-2d1d5b3e6410] Running
	I1010 18:16:58.138141   99368 system_pods.go:89] "kube-proxy-cdjzg" [98288460-9764-4e92-a589-e7e34654cfc5] Running
	I1010 18:16:58.138145   99368 system_pods.go:89] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:16:58.138148   99368 system_pods.go:89] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:16:58.138150   99368 system_pods.go:89] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:16:58.138153   99368 system_pods.go:89] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:16:58.138156   99368 system_pods.go:89] "kube-scheduler-ha-142481-m03" [a3eea545-bc31-4990-ad58-a43666964468] Running
	I1010 18:16:58.138160   99368 system_pods.go:89] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:16:58.138163   99368 system_pods.go:89] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:16:58.138165   99368 system_pods.go:89] "kube-vip-ha-142481-m03" [a93b4d63-0f6c-47b5-b987-a082b2b0d51a] Running
	I1010 18:16:58.138168   99368 system_pods.go:89] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:16:58.138175   99368 system_pods.go:126] duration metric: took 209.923309ms to wait for k8s-apps to be running ...
	I1010 18:16:58.138188   99368 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:16:58.138234   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:16:58.154620   99368 system_svc.go:56] duration metric: took 16.42135ms WaitForService to wait for kubelet
	I1010 18:16:58.154660   99368 kubeadm.go:582] duration metric: took 23.214494056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:16:58.154684   99368 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:16:58.324577   99368 request.go:632] Waited for 169.800219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I1010 18:16:58.324670   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I1010 18:16:58.324677   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:58.324687   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:58.324694   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:58.328908   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:58.329887   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329907   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329918   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329922   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329926   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329929   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329932   99368 node_conditions.go:105] duration metric: took 175.242574ms to run NodePressure ...
	I1010 18:16:58.329945   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:16:58.329965   99368 start.go:255] writing updated cluster config ...
	I1010 18:16:58.330248   99368 ssh_runner.go:195] Run: rm -f paused
	I1010 18:16:58.382565   99368 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 18:16:58.384704   99368 out.go:177] * Done! kubectl is now configured to use "ha-142481" cluster and "default" namespace by default
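	[Editor's note - illustrative sketch, not part of the captured log. The lines above (api_server.go) show the test driver polling the apiserver's /healthz endpoint at https://192.168.39.104:8443 until it answers 200 "ok" before declaring the control plane healthy. The Go snippet below is a minimal, hypothetical reconstruction of that wait loop; the URL, timeout, and InsecureSkipVerify setting are placeholder assumptions for the sketch, not the values or TLS handling used by the minikube binary.]

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the given /healthz URL until it returns HTTP 200
	// or the overall timeout expires, mirroring the "waiting for apiserver
	// healthz status" phase visible in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		// Sketch only: skip certificate verification because the apiserver
		// uses a cluster-internal CA that this standalone example lacks.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered 200 "ok"
				}
			}
			time.Sleep(500 * time.Millisecond) // back off between probes
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Placeholder endpoint echoing the address seen in the log above.
		if err := waitForHealthz("https://192.168.39.104:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
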
	
	
	==> CRI-O <==
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.453727899Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584440453704092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73e1f1cc-14bc-401c-bfe9-1e33123a8a9f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.454277630Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2339045f-8ab5-4621-83d9-67f8d8ce636e name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.454332740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2339045f-8ab5-4621-83d9-67f8d8ce636e name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.454639638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2339045f-8ab5-4621-83d9-67f8d8ce636e name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.495440841Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=050a1142-e709-4457-8280-e69fde8c95e8 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.495533768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=050a1142-e709-4457-8280-e69fde8c95e8 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.496756278Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c340cfe7-2bfe-472c-8875-ef07bfc03451 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.497187903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584440497164538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c340cfe7-2bfe-472c-8875-ef07bfc03451 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.497708972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae1b970c-e014-4d29-bc07-6f1040bb66a0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.497775960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae1b970c-e014-4d29-bc07-6f1040bb66a0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.497998825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae1b970c-e014-4d29-bc07-6f1040bb66a0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.539691422Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa96842c-9b9c-4239-8e47-f35dc0424b90 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.539780217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa96842c-9b9c-4239-8e47-f35dc0424b90 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.541130597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ebb3f3f-f66e-40ba-8fd3-9cb688d6e194 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.541618946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584440541545333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ebb3f3f-f66e-40ba-8fd3-9cb688d6e194 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.542220244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=741192cf-4122-42d2-a13c-8d1d7124659a name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.542290549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=741192cf-4122-42d2-a13c-8d1d7124659a name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.542518933Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=741192cf-4122-42d2-a13c-8d1d7124659a name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.585807189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba9bfb36-b572-4bff-af25-553c8a329d9e name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.585901970Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba9bfb36-b572-4bff-af25-553c8a329d9e name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.587425158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dd4be2b-0b86-42dd-8855-9ba0cc40eecc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.587984743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584440587959376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dd4be2b-0b86-42dd-8855-9ba0cc40eecc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.588726639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a04facdf-cdcc-4995-85eb-965d17ad7da7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.588796077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a04facdf-cdcc-4995-85eb-965d17ad7da7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:40 ha-142481 crio[662]: time="2024-10-10 18:20:40.589064727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a04facdf-cdcc-4995-85eb-965d17ad7da7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c07ad1fe2bce4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   0cebb1db5e1d3       busybox-7dff88458-xnwpj
	018e6370bdfda       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   84952d68d14fb       coredns-7c65d6cfc9-xfhq8
	5c208648c013d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   20b740049c585       coredns-7c65d6cfc9-28dll
	2eb7357e74059       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   a78996796d2ea       storage-provisioner
	b32ac96128061       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   d5a1a0a19e5bc       kindnet-4d9v4
	9f7d32719ebd2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   63eed92e7516a       kube-proxy-gwvrh
	80e86419d2aad       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   ef586683ae3a5       kube-vip-ha-142481
	751981b34b5e9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   a1a198bd8221c       kube-apiserver-ha-142481
	4d7eb644bee42       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   df70f8cffd3d4       kube-controller-manager-ha-142481
	43b160f9e1140       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   cf562380e5c8d       kube-scheduler-ha-142481
	206693e605977       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   84fece63e17b5       etcd-ha-142481
	
	
	==> coredns [018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37] <==
	[INFO] 10.244.1.2:34545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001557695s
	[INFO] 10.244.1.2:38085 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108964s
	[INFO] 10.244.1.2:51531 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130545s
	[INFO] 10.244.0.4:44429 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002010271s
	[INFO] 10.244.0.4:54303 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097043s
	[INFO] 10.244.0.4:42398 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046814s
	[INFO] 10.244.0.4:45760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003792s
	[INFO] 10.244.2.2:37649 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126566s
	[INFO] 10.244.2.2:40587 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124439s
	[INFO] 10.244.2.2:57109 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008569s
	[INFO] 10.244.1.2:44569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190494s
	[INFO] 10.244.1.2:36745 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100275s
	[INFO] 10.244.1.2:43935 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110935s
	[INFO] 10.244.0.4:38393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150867s
	[INFO] 10.244.0.4:42701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114037s
	[INFO] 10.244.0.4:38022 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153775s
	[INFO] 10.244.0.4:54617 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066619s
	[INFO] 10.244.2.2:38084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000171s
	[INFO] 10.244.2.2:42518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000188177s
	[INFO] 10.244.2.2:46288 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151696s
	[INFO] 10.244.1.2:54065 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167454s
	[INFO] 10.244.1.2:49349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138818s
	[INFO] 10.244.0.4:46873 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110042s
	[INFO] 10.244.0.4:51740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092418s
	[INFO] 10.244.0.4:46743 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066541s
	
	
	==> coredns [5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51137 - 38313 "HINFO IN 987630183612321637.831480708693955805. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.022844151s
	[INFO] 10.244.2.2:42578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001085393s
	[INFO] 10.244.1.2:46574 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002185448s
	[INFO] 10.244.0.4:39782 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001587443s
	[INFO] 10.244.0.4:53063 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000500521s
	[INFO] 10.244.2.2:54233 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215976s
	[INFO] 10.244.2.2:58923 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163879s
	[INFO] 10.244.1.2:45749 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253197s
	[INFO] 10.244.1.2:48261 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001731s
	[INFO] 10.244.1.2:46306 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179475s
	[INFO] 10.244.0.4:41358 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015898s
	[INFO] 10.244.0.4:57383 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192727s
	[INFO] 10.244.0.4:41993 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083721s
	[INFO] 10.244.0.4:60789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398106s
	[INFO] 10.244.2.2:56030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145862s
	[INFO] 10.244.1.2:34434 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144043s
	[INFO] 10.244.2.2:40687 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170156s
	[INFO] 10.244.1.2:56591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140447s
	[INFO] 10.244.1.2:34586 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215712s
	[INFO] 10.244.0.4:49420 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094221s
	
	
	==> describe nodes <==
	Name:               ha-142481
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T18_14_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:14:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-142481
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 103fd1cad9094f108b20248867a8c9f2
	  System UUID:                103fd1ca-d909-4f10-8b20-248867a8c9f2
	  Boot ID:                    ea46d519-f733-4cdc-b631-5fb0eb75e07c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xnwpj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7c65d6cfc9-28dll             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 coredns-7c65d6cfc9-xfhq8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 etcd-ha-142481                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-4d9v4                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m18s
	  kube-system                 kube-apiserver-ha-142481             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-142481    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-proxy-gwvrh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-ha-142481             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-vip-ha-142481                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m16s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m29s (x7 over 6m29s)  kubelet          Node ha-142481 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m29s (x8 over 6m29s)  kubelet          Node ha-142481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s (x8 over 6m29s)  kubelet          Node ha-142481 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m19s                  kubelet          Node ha-142481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s                  kubelet          Node ha-142481 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s                  kubelet          Node ha-142481 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m18s                  node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	  Normal  NodeReady                6m5s                   kubelet          Node ha-142481 status is now: NodeReady
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	
	
	Name:               ha-142481-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_15_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:15:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:18:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    ha-142481-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64af1b9db3cc41a38fc696e261399a82
	  System UUID:                64af1b9d-b3cc-41a3-8fc6-96e261399a82
	  Boot ID:                    1ad9a5aa-6f71-4b62-94f2-fcfc6f775bcc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wf7qs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-142481-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m26s
	  kube-system                 kindnet-5k6j8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-apiserver-ha-142481-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-142481-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-srfng                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-142481-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-vip-ha-142481-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m28s (x8 over 5m28s)  kubelet          Node ha-142481-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m28s)  kubelet          Node ha-142481-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s (x7 over 5m28s)  kubelet          Node ha-142481-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-142481-m02 status is now: NodeNotReady
	
	
	Name:               ha-142481-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_16_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:16:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    ha-142481-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 940ef061e50d4431baad36dbbc54f8b4
	  System UUID:                940ef061-e50d-4431-baad-36dbbc54f8b4
	  Boot ID:                    48ae8d44-92c8-45fc-a610-982f0242851e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5544l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 etcd-ha-142481-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m8s
	  kube-system                 kindnet-cjcsf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m10s
	  kube-system                 kube-apiserver-ha-142481-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-142481-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-cdjzg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-ha-142481-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-142481-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node ha-142481-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node ha-142481-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node ha-142481-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	
	
	Name:               ha-142481-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_17_40_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:17:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    ha-142481-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98346cf85e5d4e1e831142d0f2e86f20
	  System UUID:                98346cf8-5e5d-4e1e-8311-42d0f2e86f20
	  Boot ID:                    0fd379eb-2eaf-4e1b-aeda-b9abfe41644d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qbvk6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-4xzhw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-142481-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-142481-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-142481-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-142481-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct10 18:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050451] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040403] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.885132] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.655679] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.952802] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct10 18:14] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.063573] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063579] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.169358] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.137879] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.284778] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.055847] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.359583] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.065935] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.163908] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.085716] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.930913] kauditd_printk_skb: 69 callbacks suppressed
	[Oct10 18:15] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58] <==
	{"level":"warn","ts":"2024-10-10T18:20:40.877870Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.888289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.892543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.903239Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.914386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.926151Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.929948Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.933159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.935769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.939811Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.946969Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.950715Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.965399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.973086Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.977816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.983475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:40.990947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:41.001151Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:41.008175Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:41.013830Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:41.019670Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:41.027061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:41.038538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:41.046802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:41.048120Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:20:41 up 6 min,  0 users,  load average: 0.55, 0.39, 0.19
	Linux ha-142481 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3] <==
	I1010 18:20:05.395132       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:15.395054       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:15.395138       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:15.395354       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:15.395389       1 main.go:299] handling current node
	I1010 18:20:15.395418       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:15.395426       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	I1010 18:20:15.395505       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:15.395542       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:25.390200       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:25.390354       1 main.go:299] handling current node
	I1010 18:20:25.390392       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:25.390416       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	I1010 18:20:25.390644       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:25.390677       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:25.390737       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:25.390755       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:35.399378       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:35.399430       1 main.go:299] handling current node
	I1010 18:20:35.399452       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:35.399457       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	I1010 18:20:35.399642       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:35.399667       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:35.399718       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:35.399723       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c] <==
	I1010 18:14:21.601752       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1010 18:14:21.615538       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1010 18:14:22.685756       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1010 18:14:22.961093       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1010 18:15:13.597943       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.598021       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.162µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1010 18:15:13.599137       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.600311       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.601619       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.769951ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1010 18:17:03.850296       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50978: use of closed network connection
	E1010 18:17:04.060164       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50998: use of closed network connection
	E1010 18:17:04.265073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51022: use of closed network connection
	E1010 18:17:04.497148       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51026: use of closed network connection
	E1010 18:17:04.691753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51052: use of closed network connection
	E1010 18:17:04.874313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51072: use of closed network connection
	E1010 18:17:05.055509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51096: use of closed network connection
	E1010 18:17:05.241806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51110: use of closed network connection
	E1010 18:17:05.418962       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51128: use of closed network connection
	E1010 18:17:05.714305       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35886: use of closed network connection
	E1010 18:17:05.894226       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35894: use of closed network connection
	E1010 18:17:06.084951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35922: use of closed network connection
	E1010 18:17:06.281751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35936: use of closed network connection
	E1010 18:17:06.459430       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35954: use of closed network connection
	E1010 18:17:06.642941       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35966: use of closed network connection
	W1010 18:18:37.363890       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.175]
	
	
	==> kube-controller-manager [4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf] <==
	I1010 18:17:39.636355       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-142481-m04" podCIDRs=["10.244.3.0/24"]
	I1010 18:17:39.636414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.636469       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.668112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.689740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:40.177402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:40.233291       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:41.187681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:41.226193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:42.243172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:42.243646       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-142481-m04"
	I1010 18:17:42.333986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:49.941287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:59.249000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:59.249257       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-142481-m04"
	I1010 18:17:59.269371       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:00.212787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:09.988078       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:57.270927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-142481-m04"
	I1010 18:18:57.272138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:18:57.296852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:18:57.478314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.230176ms"
	I1010 18:18:57.478428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.474µs"
	I1010 18:19:00.278371       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:19:02.479119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	
	
	==> kube-proxy [9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 18:14:24.446239       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 18:14:24.508320       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.104"]
	E1010 18:14:24.508809       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:14:24.556831       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 18:14:24.556922       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 18:14:24.556961       1 server_linux.go:169] "Using iptables Proxier"
	I1010 18:14:24.559536       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:14:24.560518       1 server.go:483] "Version info" version="v1.31.1"
	I1010 18:14:24.560742       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:14:24.562971       1 config.go:199] "Starting service config controller"
	I1010 18:14:24.563611       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 18:14:24.563720       1 config.go:105] "Starting endpoint slice config controller"
	I1010 18:14:24.563744       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 18:14:24.566215       1 config.go:328] "Starting node config controller"
	I1010 18:14:24.566227       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 18:14:24.665476       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1010 18:14:24.665712       1 shared_informer.go:320] Caches are synced for service config
	I1010 18:14:24.667666       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026] <==
	W1010 18:14:16.494936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1010 18:14:16.495042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.517223       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1010 18:14:16.517488       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1010 18:14:16.544128       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 18:14:16.544233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.560806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.560856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.640427       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1010 18:14:16.640554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.701938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.702008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.773339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.773523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.873800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.874006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1010 18:14:18.221733       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1010 18:16:59.352658       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wf7qs\": pod busybox-7dff88458-wf7qs is already assigned to node \"ha-142481-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wf7qs" node="ha-142481-m02"
	E1010 18:16:59.352878       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8cfeb378-41dd-4850-bbc6-610453612cf5(default/busybox-7dff88458-wf7qs) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wf7qs"
	E1010 18:16:59.352933       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wf7qs\": pod busybox-7dff88458-wf7qs is already assigned to node \"ha-142481-m02\"" pod="default/busybox-7dff88458-wf7qs"
	I1010 18:16:59.352990       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wf7qs" node="ha-142481-m02"
	E1010 18:17:39.876287       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qbvk6\": pod kindnet-qbvk6 is already assigned to node \"ha-142481-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qbvk6" node="ha-142481-m04"
	E1010 18:17:39.876531       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 67b280c2-562d-45e0-a362-726dadaf5cf6(kube-system/kindnet-qbvk6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qbvk6"
	E1010 18:17:39.876554       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qbvk6\": pod kindnet-qbvk6 is already assigned to node \"ha-142481-m04\"" pod="kube-system/kindnet-qbvk6"
	I1010 18:17:39.876861       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qbvk6" node="ha-142481-m04"
	
	
	==> kubelet <==
	Oct 10 18:19:21 ha-142481 kubelet[1298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 18:19:21 ha-142481 kubelet[1298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 18:19:21 ha-142481 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 18:19:21 ha-142481 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 18:19:21 ha-142481 kubelet[1298]: E1010 18:19:21.653774    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584361653351989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:21 ha-142481 kubelet[1298]: E1010 18:19:21.654165    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584361653351989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:31 ha-142481 kubelet[1298]: E1010 18:19:31.655501    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584371655103881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:31 ha-142481 kubelet[1298]: E1010 18:19:31.656061    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584371655103881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:41 ha-142481 kubelet[1298]: E1010 18:19:41.657888    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584381657459506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:41 ha-142481 kubelet[1298]: E1010 18:19:41.657923    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584381657459506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:51 ha-142481 kubelet[1298]: E1010 18:19:51.662516    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584391660533273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:51 ha-142481 kubelet[1298]: E1010 18:19:51.662805    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584391660533273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:01 ha-142481 kubelet[1298]: E1010 18:20:01.665482    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584401664880599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:01 ha-142481 kubelet[1298]: E1010 18:20:01.665528    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584401664880599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:11 ha-142481 kubelet[1298]: E1010 18:20:11.668335    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584411667894103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:11 ha-142481 kubelet[1298]: E1010 18:20:11.668374    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584411667894103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.541634    1298 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 18:20:21 ha-142481 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.670317    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584421670063294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.670363    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584421670063294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:31 ha-142481 kubelet[1298]: E1010 18:20:31.672182    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584431671864331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:31 ha-142481 kubelet[1298]: E1010 18:20:31.672436    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584431671864331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-142481 -n ha-142481
helpers_test.go:261: (dbg) Run:  kubectl --context ha-142481 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.390291796s)
ha_test.go:415: expected profile "ha-142481" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-142481\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-142481\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-142481\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.104\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.186\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.175\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.164\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,
\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize
\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-142481 -n ha-142481
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-142481 logs -n 25: (1.432496763s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481:/home/docker/cp-test_ha-142481-m03_ha-142481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481 sudo cat                                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m02:/home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m04 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp testdata/cp-test.txt                                                | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481:/home/docker/cp-test_ha-142481-m04_ha-142481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481 sudo cat                                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m02:/home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03:/home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m03 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-142481 node stop m02 -v=7                                                     | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 18:13:38
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:13:38.106562   99368 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:13:38.106682   99368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:38.106690   99368 out.go:358] Setting ErrFile to fd 2...
	I1010 18:13:38.106694   99368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:38.106895   99368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:13:38.107477   99368 out.go:352] Setting JSON to false
	I1010 18:13:38.108309   99368 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6964,"bootTime":1728577054,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:13:38.108413   99368 start.go:139] virtualization: kvm guest
	I1010 18:13:38.110824   99368 out.go:177] * [ha-142481] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 18:13:38.112418   99368 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 18:13:38.112454   99368 notify.go:220] Checking for updates...
	I1010 18:13:38.114936   99368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:13:38.116370   99368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:13:38.117745   99368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.118944   99368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:13:38.120250   99368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:13:38.121551   99368 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 18:13:38.157644   99368 out.go:177] * Using the kvm2 driver based on user configuration
	I1010 18:13:38.158888   99368 start.go:297] selected driver: kvm2
	I1010 18:13:38.158919   99368 start.go:901] validating driver "kvm2" against <nil>
	I1010 18:13:38.158934   99368 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:13:38.159711   99368 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:13:38.159814   99368 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 18:13:38.174780   99368 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 18:13:38.174840   99368 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 18:13:38.175095   99368 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:13:38.175132   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:13:38.175195   99368 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1010 18:13:38.175219   99368 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 18:13:38.175271   99368 start.go:340] cluster config:
	{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1010 18:13:38.175372   99368 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:13:38.177295   99368 out.go:177] * Starting "ha-142481" primary control-plane node in "ha-142481" cluster
	I1010 18:13:38.178523   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:13:38.178564   99368 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:13:38.178578   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:13:38.178671   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:13:38.178686   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:13:38.179056   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:13:38.179080   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json: {Name:mk6ba06e5ddbd39667f8d6031429fc5b567ca233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:13:38.179240   99368 start.go:360] acquireMachinesLock for ha-142481: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:13:38.179277   99368 start.go:364] duration metric: took 20.536µs to acquireMachinesLock for "ha-142481"
	I1010 18:13:38.179299   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:13:38.179350   99368 start.go:125] createHost starting for "" (driver="kvm2")
	I1010 18:13:38.180956   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:13:38.181134   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:13:38.181190   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:13:38.195735   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I1010 18:13:38.196239   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:13:38.196810   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:13:38.196834   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:13:38.197229   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:13:38.197439   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:13:38.197656   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:13:38.197815   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:13:38.197850   99368 client.go:168] LocalClient.Create starting
	I1010 18:13:38.197896   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:13:38.197929   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:13:38.197946   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:13:38.197994   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:13:38.198011   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:13:38.198032   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:13:38.198051   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:13:38.198059   99368 main.go:141] libmachine: (ha-142481) Calling .PreCreateCheck
	I1010 18:13:38.198443   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:13:38.198814   99368 main.go:141] libmachine: Creating machine...
	I1010 18:13:38.198829   99368 main.go:141] libmachine: (ha-142481) Calling .Create
	I1010 18:13:38.199006   99368 main.go:141] libmachine: (ha-142481) Creating KVM machine...
	I1010 18:13:38.200423   99368 main.go:141] libmachine: (ha-142481) DBG | found existing default KVM network
	I1010 18:13:38.201134   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.200987   99391 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1010 18:13:38.201152   99368 main.go:141] libmachine: (ha-142481) DBG | created network xml: 
	I1010 18:13:38.201163   99368 main.go:141] libmachine: (ha-142481) DBG | <network>
	I1010 18:13:38.201168   99368 main.go:141] libmachine: (ha-142481) DBG |   <name>mk-ha-142481</name>
	I1010 18:13:38.201173   99368 main.go:141] libmachine: (ha-142481) DBG |   <dns enable='no'/>
	I1010 18:13:38.201179   99368 main.go:141] libmachine: (ha-142481) DBG |   
	I1010 18:13:38.201186   99368 main.go:141] libmachine: (ha-142481) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1010 18:13:38.201195   99368 main.go:141] libmachine: (ha-142481) DBG |     <dhcp>
	I1010 18:13:38.201204   99368 main.go:141] libmachine: (ha-142481) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1010 18:13:38.201210   99368 main.go:141] libmachine: (ha-142481) DBG |     </dhcp>
	I1010 18:13:38.201224   99368 main.go:141] libmachine: (ha-142481) DBG |   </ip>
	I1010 18:13:38.201233   99368 main.go:141] libmachine: (ha-142481) DBG |   
	I1010 18:13:38.201241   99368 main.go:141] libmachine: (ha-142481) DBG | </network>
	I1010 18:13:38.201253   99368 main.go:141] libmachine: (ha-142481) DBG | 
	I1010 18:13:38.206109   99368 main.go:141] libmachine: (ha-142481) DBG | trying to create private KVM network mk-ha-142481 192.168.39.0/24...
	I1010 18:13:38.273921   99368 main.go:141] libmachine: (ha-142481) DBG | private KVM network mk-ha-142481 192.168.39.0/24 created
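
	[Editor's note: the private network defined above can be reproduced outside of minikube by feeding the same <network> XML to libvirt. The sketch below is illustrative only, not minikube's own code (which drives libvirt through its Go bindings); the file name mk-ha-142481.xml and the qemu:///system URI are taken from the log, everything else is an assumption.]

```go
// Sketch: define, start and autostart a libvirt network from an XML file
// like the <network> document printed in the log above.
// Assumes virsh is installed and qemu:///system is reachable.
package main

import (
	"log"
	"os/exec"
)

func virsh(args ...string) {
	cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
	}
	log.Printf("virsh %v: %s", args, out)
}

func main() {
	// mk-ha-142481.xml would hold the <network> XML shown in the log.
	virsh("net-define", "mk-ha-142481.xml")
	virsh("net-start", "mk-ha-142481")
	virsh("net-autostart", "mk-ha-142481")
}
```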
	I1010 18:13:38.273973   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.273888   99391 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.273987   99368 main.go:141] libmachine: (ha-142481) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 ...
	I1010 18:13:38.274008   99368 main.go:141] libmachine: (ha-142481) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:13:38.274030   99368 main.go:141] libmachine: (ha-142481) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:13:38.538580   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.538442   99391 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa...
	I1010 18:13:38.734956   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.734800   99391 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/ha-142481.rawdisk...
	I1010 18:13:38.734986   99368 main.go:141] libmachine: (ha-142481) DBG | Writing magic tar header
	I1010 18:13:38.734996   99368 main.go:141] libmachine: (ha-142481) DBG | Writing SSH key tar header
	I1010 18:13:38.735006   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.734920   99391 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 ...
	I1010 18:13:38.735023   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481
	I1010 18:13:38.735054   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:13:38.735062   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 (perms=drwx------)
	I1010 18:13:38.735074   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:13:38.735083   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:13:38.735098   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:13:38.735107   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.735121   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:13:38.735132   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:13:38.735139   99368 main.go:141] libmachine: (ha-142481) Creating domain...
	I1010 18:13:38.735156   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:13:38.735166   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:13:38.735171   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:13:38.735177   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home
	I1010 18:13:38.735183   99368 main.go:141] libmachine: (ha-142481) DBG | Skipping /home - not owner
	I1010 18:13:38.736388   99368 main.go:141] libmachine: (ha-142481) define libvirt domain using xml: 
	I1010 18:13:38.736417   99368 main.go:141] libmachine: (ha-142481) <domain type='kvm'>
	I1010 18:13:38.736427   99368 main.go:141] libmachine: (ha-142481)   <name>ha-142481</name>
	I1010 18:13:38.736439   99368 main.go:141] libmachine: (ha-142481)   <memory unit='MiB'>2200</memory>
	I1010 18:13:38.736471   99368 main.go:141] libmachine: (ha-142481)   <vcpu>2</vcpu>
	I1010 18:13:38.736493   99368 main.go:141] libmachine: (ha-142481)   <features>
	I1010 18:13:38.736527   99368 main.go:141] libmachine: (ha-142481)     <acpi/>
	I1010 18:13:38.736554   99368 main.go:141] libmachine: (ha-142481)     <apic/>
	I1010 18:13:38.736566   99368 main.go:141] libmachine: (ha-142481)     <pae/>
	I1010 18:13:38.736588   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736600   99368 main.go:141] libmachine: (ha-142481)   </features>
	I1010 18:13:38.736610   99368 main.go:141] libmachine: (ha-142481)   <cpu mode='host-passthrough'>
	I1010 18:13:38.736620   99368 main.go:141] libmachine: (ha-142481)   
	I1010 18:13:38.736633   99368 main.go:141] libmachine: (ha-142481)   </cpu>
	I1010 18:13:38.736643   99368 main.go:141] libmachine: (ha-142481)   <os>
	I1010 18:13:38.736649   99368 main.go:141] libmachine: (ha-142481)     <type>hvm</type>
	I1010 18:13:38.736661   99368 main.go:141] libmachine: (ha-142481)     <boot dev='cdrom'/>
	I1010 18:13:38.736672   99368 main.go:141] libmachine: (ha-142481)     <boot dev='hd'/>
	I1010 18:13:38.736684   99368 main.go:141] libmachine: (ha-142481)     <bootmenu enable='no'/>
	I1010 18:13:38.736693   99368 main.go:141] libmachine: (ha-142481)   </os>
	I1010 18:13:38.736700   99368 main.go:141] libmachine: (ha-142481)   <devices>
	I1010 18:13:38.736710   99368 main.go:141] libmachine: (ha-142481)     <disk type='file' device='cdrom'>
	I1010 18:13:38.736729   99368 main.go:141] libmachine: (ha-142481)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/boot2docker.iso'/>
	I1010 18:13:38.736737   99368 main.go:141] libmachine: (ha-142481)       <target dev='hdc' bus='scsi'/>
	I1010 18:13:38.736742   99368 main.go:141] libmachine: (ha-142481)       <readonly/>
	I1010 18:13:38.736748   99368 main.go:141] libmachine: (ha-142481)     </disk>
	I1010 18:13:38.736754   99368 main.go:141] libmachine: (ha-142481)     <disk type='file' device='disk'>
	I1010 18:13:38.736761   99368 main.go:141] libmachine: (ha-142481)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:13:38.736768   99368 main.go:141] libmachine: (ha-142481)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/ha-142481.rawdisk'/>
	I1010 18:13:38.736773   99368 main.go:141] libmachine: (ha-142481)       <target dev='hda' bus='virtio'/>
	I1010 18:13:38.736780   99368 main.go:141] libmachine: (ha-142481)     </disk>
	I1010 18:13:38.736789   99368 main.go:141] libmachine: (ha-142481)     <interface type='network'>
	I1010 18:13:38.736795   99368 main.go:141] libmachine: (ha-142481)       <source network='mk-ha-142481'/>
	I1010 18:13:38.736800   99368 main.go:141] libmachine: (ha-142481)       <model type='virtio'/>
	I1010 18:13:38.736804   99368 main.go:141] libmachine: (ha-142481)     </interface>
	I1010 18:13:38.736811   99368 main.go:141] libmachine: (ha-142481)     <interface type='network'>
	I1010 18:13:38.736816   99368 main.go:141] libmachine: (ha-142481)       <source network='default'/>
	I1010 18:13:38.736822   99368 main.go:141] libmachine: (ha-142481)       <model type='virtio'/>
	I1010 18:13:38.736831   99368 main.go:141] libmachine: (ha-142481)     </interface>
	I1010 18:13:38.736837   99368 main.go:141] libmachine: (ha-142481)     <serial type='pty'>
	I1010 18:13:38.736842   99368 main.go:141] libmachine: (ha-142481)       <target port='0'/>
	I1010 18:13:38.736868   99368 main.go:141] libmachine: (ha-142481)     </serial>
	I1010 18:13:38.736882   99368 main.go:141] libmachine: (ha-142481)     <console type='pty'>
	I1010 18:13:38.736896   99368 main.go:141] libmachine: (ha-142481)       <target type='serial' port='0'/>
	I1010 18:13:38.736911   99368 main.go:141] libmachine: (ha-142481)     </console>
	I1010 18:13:38.736921   99368 main.go:141] libmachine: (ha-142481)     <rng model='virtio'>
	I1010 18:13:38.736929   99368 main.go:141] libmachine: (ha-142481)       <backend model='random'>/dev/random</backend>
	I1010 18:13:38.736935   99368 main.go:141] libmachine: (ha-142481)     </rng>
	I1010 18:13:38.736942   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736951   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736962   99368 main.go:141] libmachine: (ha-142481)   </devices>
	I1010 18:13:38.736973   99368 main.go:141] libmachine: (ha-142481) </domain>
	I1010 18:13:38.737007   99368 main.go:141] libmachine: (ha-142481) 
	I1010 18:13:38.741472   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:b1:0c:5d in network default
	I1010 18:13:38.742188   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:38.742202   99368 main.go:141] libmachine: (ha-142481) Ensuring networks are active...
	I1010 18:13:38.743102   99368 main.go:141] libmachine: (ha-142481) Ensuring network default is active
	I1010 18:13:38.743484   99368 main.go:141] libmachine: (ha-142481) Ensuring network mk-ha-142481 is active
	I1010 18:13:38.743981   99368 main.go:141] libmachine: (ha-142481) Getting domain xml...
	I1010 18:13:38.744831   99368 main.go:141] libmachine: (ha-142481) Creating domain...
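
	[Editor's note: the domain XML logged above is rendered from a template before being passed to libvirt. A minimal text/template sketch of the variable parts (name, memory, vCPUs, disk and ISO paths, network names) follows; the field names and placeholder paths are assumptions for illustration, not minikube's actual template.]

```go
// Sketch: render a libvirt domain definition like the one in the log.
package main

import (
	"os"
	"text/template"
)

// Only the fields that differ between machines are templated; the static
// <features>, serial/console and rng sections from the log are omitted.
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/><bootmenu enable='no'/></os>
  <devices>
    <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
    <interface type='network'><source network='{{.PrivateNet}}'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>
`

type machine struct {
	Name, ISOPath, DiskPath, PrivateNet string
	MemoryMiB, CPUs                     int
}

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	m := machine{
		Name:       "ha-142481",
		MemoryMiB:  2200,
		CPUs:       2,
		ISOPath:    "/path/to/boot2docker.iso",    // placeholder path
		DiskPath:   "/path/to/ha-142481.rawdisk",  // placeholder path
		PrivateNet: "mk-ha-142481",
	}
	if err := tmpl.Execute(os.Stdout, m); err != nil {
		panic(err)
	}
}
```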
	I1010 18:13:39.943643   99368 main.go:141] libmachine: (ha-142481) Waiting to get IP...
	I1010 18:13:39.944415   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:39.944819   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:39.944886   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:39.944805   99391 retry.go:31] will retry after 263.450232ms: waiting for machine to come up
	I1010 18:13:40.210494   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.210938   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.210979   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.210904   99391 retry.go:31] will retry after 318.83444ms: waiting for machine to come up
	I1010 18:13:40.531556   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.531982   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.532010   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.531946   99391 retry.go:31] will retry after 379.250744ms: waiting for machine to come up
	I1010 18:13:40.912440   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.912909   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.912942   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.912844   99391 retry.go:31] will retry after 505.831382ms: waiting for machine to come up
	I1010 18:13:41.420670   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:41.421119   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:41.421141   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:41.421071   99391 retry.go:31] will retry after 555.074801ms: waiting for machine to come up
	I1010 18:13:41.977849   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:41.978257   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:41.978281   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:41.978194   99391 retry.go:31] will retry after 636.152434ms: waiting for machine to come up
	I1010 18:13:42.615909   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:42.616285   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:42.616320   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:42.616236   99391 retry.go:31] will retry after 907.451913ms: waiting for machine to come up
	I1010 18:13:43.524700   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:43.525164   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:43.525241   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:43.525119   99391 retry.go:31] will retry after 916.746032ms: waiting for machine to come up
	I1010 18:13:44.443019   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:44.443439   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:44.443463   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:44.443379   99391 retry.go:31] will retry after 1.722399675s: waiting for machine to come up
	I1010 18:13:46.168252   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:46.168660   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:46.168691   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:46.168625   99391 retry.go:31] will retry after 2.191060126s: waiting for machine to come up
	I1010 18:13:48.361115   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:48.361666   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:48.361699   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:48.361609   99391 retry.go:31] will retry after 2.390239739s: waiting for machine to come up
	I1010 18:13:50.755200   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:50.755610   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:50.755636   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:50.755576   99391 retry.go:31] will retry after 2.188596051s: waiting for machine to come up
	I1010 18:13:52.946995   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:52.947360   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:52.947382   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:52.947318   99391 retry.go:31] will retry after 3.863064875s: waiting for machine to come up
	I1010 18:13:56.814839   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:56.815487   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:56.815508   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:56.815409   99391 retry.go:31] will retry after 3.762373701s: waiting for machine to come up
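
	[Editor's note: the "will retry after ..." lines above are a poll with growing backoff while the new domain waits for a DHCP lease. A generic sketch of that pattern is below; the growth factor, jitter and attempt count are assumptions, not the exact schedule retry.go uses.]

```go
// Sketch: poll a condition with growing, slightly jittered backoff, the way
// the retry.go lines above wait for the machine to obtain an IP address.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, check func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := check(); err == nil {
			return nil
		}
		// Grow the delay and add up to 25% jitter, roughly matching the
		// increasing waits seen in the log (263ms, 318ms, 379ms, ...).
		wait := delay + time.Duration(rand.Int63n(int64(delay)/4+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	start := time.Now()
	_ = retryWithBackoff(20, 250*time.Millisecond, func() error {
		// Stand-in for "does the domain have a DHCP lease yet?".
		if time.Since(start) > 3*time.Second {
			return nil
		}
		return errors.New("no lease yet")
	})
}
```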
	I1010 18:14:00.580406   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.580915   99368 main.go:141] libmachine: (ha-142481) Found IP for machine: 192.168.39.104
	I1010 18:14:00.580940   99368 main.go:141] libmachine: (ha-142481) Reserving static IP address...
	I1010 18:14:00.580952   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has current primary IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.581384   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find host DHCP lease matching {name: "ha-142481", mac: "52:54:00:3e:fa:00", ip: "192.168.39.104"} in network mk-ha-142481
	I1010 18:14:00.656496   99368 main.go:141] libmachine: (ha-142481) DBG | Getting to WaitForSSH function...
	I1010 18:14:00.656530   99368 main.go:141] libmachine: (ha-142481) Reserved static IP address: 192.168.39.104
	I1010 18:14:00.656576   99368 main.go:141] libmachine: (ha-142481) Waiting for SSH to be available...
	I1010 18:14:00.659584   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.659994   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.660032   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.660120   99368 main.go:141] libmachine: (ha-142481) DBG | Using SSH client type: external
	I1010 18:14:00.660175   99368 main.go:141] libmachine: (ha-142481) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa (-rw-------)
	I1010 18:14:00.660252   99368 main.go:141] libmachine: (ha-142481) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:14:00.660280   99368 main.go:141] libmachine: (ha-142481) DBG | About to run SSH command:
	I1010 18:14:00.660297   99368 main.go:141] libmachine: (ha-142481) DBG | exit 0
	I1010 18:14:00.789008   99368 main.go:141] libmachine: (ha-142481) DBG | SSH cmd err, output: <nil>: 
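
	[Editor's note: before the driver can run `exit 0` over SSH as above, it only needs port 22 on the guest to accept connections. A minimal way to wait for that is a TCP dial loop; the address is the one from the log, the timeouts are assumptions.]

```go
// Sketch: wait until the guest's SSH port accepts TCP connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSSH(addr string, overall time.Duration) error {
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close() // port is open; the SSH handshake itself comes later
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh on %s not reachable within %v", addr, overall)
}

func main() {
	if err := waitForSSH("192.168.39.104:22", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```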
	I1010 18:14:00.789292   99368 main.go:141] libmachine: (ha-142481) KVM machine creation complete!
	I1010 18:14:00.789591   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:14:00.790247   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:00.790563   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:00.790779   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:14:00.790797   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:00.791977   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:14:00.791993   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:14:00.792000   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:14:00.792007   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:00.795049   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.795517   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.795546   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.795737   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:00.795931   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.796109   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.796201   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:00.796384   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:00.796677   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:00.796694   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:14:00.904506   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:00.904529   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:14:00.904538   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:00.907535   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.907882   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.907924   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.908104   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:00.908324   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.908499   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.908658   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:00.908892   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:00.909076   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:00.909086   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:14:01.018108   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:14:01.018217   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:14:01.018228   99368 main.go:141] libmachine: Provisioning with buildroot...
	I1010 18:14:01.018236   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.018570   99368 buildroot.go:166] provisioning hostname "ha-142481"
	I1010 18:14:01.018602   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.018780   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.021625   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.022001   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.022049   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.022142   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.022330   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.022485   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.022628   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.022792   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:01.023020   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:01.023040   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481 && echo "ha-142481" | sudo tee /etc/hostname
	I1010 18:14:01.148746   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481
	
	I1010 18:14:01.148780   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.151700   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.152069   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.152101   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.152379   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.152566   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.152733   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.153007   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.153254   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:01.153456   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:01.153473   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:14:01.270656   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
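
	[Editor's note: the hostname and /etc/hosts provisioning above runs shell commands over SSH with the machine's generated key. A minimal sketch using golang.org/x/crypto/ssh follows; host-key checking is skipped for brevity, the key path is abbreviated, and only the hostname step is shown. This is illustrative, not the provisioner's actual code.]

```go
// Sketch: run the hostname-provisioning command shown above over SSH.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Path is an assumption standing in for the machine's id_rsa from the log.
	keyPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-142481/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.104:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same idea as the provisioning step: set the hostname over SSH.
	out, err := sess.CombinedOutput(`sudo hostname ha-142481 && echo "ha-142481" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatalf("provisioning failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}
```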
	I1010 18:14:01.270702   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:14:01.270768   99368 buildroot.go:174] setting up certificates
	I1010 18:14:01.270784   99368 provision.go:84] configureAuth start
	I1010 18:14:01.270804   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.271123   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:01.274054   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.274377   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.274414   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.274599   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.277056   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.277372   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.277402   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.277532   99368 provision.go:143] copyHostCerts
	I1010 18:14:01.277566   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:01.277608   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:14:01.277620   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:01.277701   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:14:01.277845   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:01.277882   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:14:01.277893   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:01.277935   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:14:01.278014   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:01.278037   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:14:01.278043   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:01.278078   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:14:01.278160   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481 san=[127.0.0.1 192.168.39.104 ha-142481 localhost minikube]
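
	[Editor's note: the server certificate generated above carries the SAN set 127.0.0.1, 192.168.39.104, ha-142481, localhost, minikube. A standard-library sketch of issuing such a certificate follows; for brevity the CA is created inline rather than read from ca.pem/ca-key.pem, and RSA key sizes and validity are assumptions.]

```go
// Sketch: issue a server certificate whose SANs match the list in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real flow signs with the existing minikubeCA key).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-142481"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		DNSNames:    []string{"ha-142481", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.104")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```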
	I1010 18:14:01.863097   99368 provision.go:177] copyRemoteCerts
	I1010 18:14:01.863162   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:14:01.863187   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.866290   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.866626   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.866657   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.866843   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.867075   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.867295   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.867474   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:01.951802   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:14:01.951888   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:14:01.976504   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:14:01.976590   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1010 18:14:02.000608   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:14:02.000694   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 18:14:02.025514   99368 provision.go:87] duration metric: took 754.678106ms to configureAuth
	I1010 18:14:02.025558   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:14:02.025780   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:02.025872   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.028822   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.029419   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.029448   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.029637   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.029859   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.030076   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.030249   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.030408   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:02.030613   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:02.030638   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:14:02.255598   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:14:02.255635   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:14:02.255663   99368 main.go:141] libmachine: (ha-142481) Calling .GetURL
	I1010 18:14:02.256998   99368 main.go:141] libmachine: (ha-142481) DBG | Using libvirt version 6000000
	I1010 18:14:02.259693   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.260061   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.260105   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.260245   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:14:02.260269   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:14:02.260277   99368 client.go:171] duration metric: took 24.062416136s to LocalClient.Create
	I1010 18:14:02.260305   99368 start.go:167] duration metric: took 24.062491775s to libmachine.API.Create "ha-142481"
	I1010 18:14:02.260317   99368 start.go:293] postStartSetup for "ha-142481" (driver="kvm2")
	I1010 18:14:02.260330   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:14:02.260355   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.260598   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:14:02.260623   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.262655   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.262966   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.262995   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.263106   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.263281   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.263418   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.263549   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.347386   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:14:02.352007   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:14:02.352037   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:14:02.352118   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:14:02.352241   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:14:02.352255   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:14:02.352383   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:14:02.361986   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:02.387757   99368 start.go:296] duration metric: took 127.42447ms for postStartSetup
	I1010 18:14:02.387817   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:14:02.388481   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:02.391530   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.391900   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.391927   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.392187   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:02.392385   99368 start.go:128] duration metric: took 24.213024958s to createHost
	I1010 18:14:02.392410   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.394865   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.395239   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.395269   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.395418   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.395616   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.395799   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.395913   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.396045   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:02.396233   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:02.396253   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:14:02.506374   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584042.463674877
	
	I1010 18:14:02.506405   99368 fix.go:216] guest clock: 1728584042.463674877
	I1010 18:14:02.506415   99368 fix.go:229] Guest: 2024-10-10 18:14:02.463674877 +0000 UTC Remote: 2024-10-10 18:14:02.392397471 +0000 UTC m=+24.322985546 (delta=71.277406ms)
	I1010 18:14:02.506501   99368 fix.go:200] guest clock delta is within tolerance: 71.277406ms
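
	[Editor's note: the tolerance check above compares the guest's `date +%s.%N` output against the host's own timestamp at that moment. A tiny worked sketch of that delta computation, using the two values from the log, is below; the 2s tolerance is an assumption.]

```go
// Sketch: parse a guest timestamp ("seconds.nanoseconds") and compare it to
// a host timestamp, as the guest-clock check above does.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestDelta(guest string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		nsec, _ = strconv.ParseInt(frac, 10, 64)
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	host := time.Unix(1728584042, 392397471)                 // "Remote" value from the log
	d, err := guestDelta("1728584042.463674877", host)       // guest clock from the log
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance
	fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < tolerance) // prints ~71.277406ms
}
```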
	I1010 18:14:02.506513   99368 start.go:83] releasing machines lock for "ha-142481", held for 24.327223548s
	I1010 18:14:02.506550   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.506889   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:02.509401   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.509764   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.509802   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.509942   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510549   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510772   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510843   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:14:02.510929   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.511003   99368 ssh_runner.go:195] Run: cat /version.json
	I1010 18:14:02.511038   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.513796   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.513896   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514234   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.514254   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514280   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.514293   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514533   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.514631   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.514713   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.514804   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.514890   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.514938   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.515026   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.515073   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.615715   99368 ssh_runner.go:195] Run: systemctl --version
	I1010 18:14:02.621955   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:14:02.785775   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:14:02.792271   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:14:02.792352   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:14:02.808426   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:14:02.808464   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:14:02.808542   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:14:02.825314   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:14:02.842065   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:14:02.842135   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:14:02.858984   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:14:02.876330   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:14:02.990523   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:14:03.132316   99368 docker.go:233] disabling docker service ...
	I1010 18:14:03.132386   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:14:03.147477   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:14:03.161268   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:14:03.304325   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:14:03.429397   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:14:03.443898   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:14:03.463181   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:14:03.463273   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.474215   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:14:03.474286   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.485513   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.496394   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.507084   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:14:03.517675   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.527867   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.545825   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.556723   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:14:03.566428   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:14:03.566513   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:14:03.579726   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:14:03.589897   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:03.711306   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
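
	[Editor's note: the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place to set the pause image and cgroup manager before crio is restarted. A sketch of the same two edits done with regexp replacements in Go follows; the file path and key names are taken from the log, and the program is assumed to run as root with a `systemctl restart crio` afterwards.]

```go
// Sketch: apply the pause_image and cgroup_manager edits shown in the log
// to the CRI-O drop-in config.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}
```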
	I1010 18:14:03.812353   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:14:03.812440   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:14:03.817265   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:14:03.817331   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:14:03.821238   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:14:03.865031   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:14:03.865131   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:03.893405   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:03.923688   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:14:03.925089   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:03.927862   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:03.928210   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:03.928239   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:03.928482   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:14:03.932808   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:03.947607   99368 kubeadm.go:883] updating cluster {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:14:03.947723   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:14:03.947771   99368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:14:03.980321   99368 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 18:14:03.980402   99368 ssh_runner.go:195] Run: which lz4
	I1010 18:14:03.984490   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1010 18:14:03.984586   99368 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 18:14:03.988814   99368 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 18:14:03.988866   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 18:14:05.363098   99368 crio.go:462] duration metric: took 1.37853137s to copy over tarball
	I1010 18:14:05.363172   99368 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 18:14:07.378827   99368 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.01562073s)
	I1010 18:14:07.378863   99368 crio.go:469] duration metric: took 2.015730634s to extract the tarball
	I1010 18:14:07.378873   99368 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 18:14:07.415494   99368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:14:07.461637   99368 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:14:07.461668   99368 cache_images.go:84] Images are preloaded, skipping loading
	I1010 18:14:07.461678   99368 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I1010 18:14:07.461810   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:14:07.461895   99368 ssh_runner.go:195] Run: crio config
	I1010 18:14:07.511179   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:14:07.511203   99368 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 18:14:07.511219   99368 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 18:14:07.511240   99368 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-142481 NodeName:ha-142481 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:14:07.511378   99368 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-142481"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
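The InitConfiguration and ClusterConfiguration documents above still use the deprecated kubeadm.k8s.io/v1beta3 API, which kubeadm 1.31 warns about later in this run and suggests migrating. A sketch of that migration, with an illustrative output path:

    # Rewrite the generated v1beta3 config in the current kubeadm config API version
    sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml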
	
	I1010 18:14:07.511402   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:14:07.511447   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:14:07.530825   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:14:07.530966   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
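The manifest above runs kube-vip as a static pod that advertises the HA virtual IP 192.168.39.254 on eth0 and load-balances port 8443 across control-plane nodes. A rough check that the VIP is live once the control plane is up (assumes anonymous access to /healthz is still enabled, which is the Kubernetes default):

    ip addr show eth0 | grep 192.168.39.254      # VIP should be bound on the current kube-vip leader
    curl -k https://192.168.39.254:8443/healthz  # expect "ok" from the apiserver behind the VIP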
	I1010 18:14:07.531061   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:07.541336   99368 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:14:07.541418   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1010 18:14:07.551149   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1010 18:14:07.567775   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:14:07.585048   99368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1010 18:14:07.601614   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1010 18:14:07.618435   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:14:07.622366   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:07.634534   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:07.769061   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
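kubelet is started here before kubeadm init runs; kubeadm's own health wait further down polls the kubelet healthz endpoint on 127.0.0.1:10248, which can also be checked by hand:

    systemctl is-active kubelet
    curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy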
	I1010 18:14:07.786728   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.104
	I1010 18:14:07.786757   99368 certs.go:194] generating shared ca certs ...
	I1010 18:14:07.786780   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.786963   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:14:07.787019   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:14:07.787049   99368 certs.go:256] generating profile certs ...
	I1010 18:14:07.787126   99368 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:14:07.787145   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt with IP's: []
	I1010 18:14:07.903290   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt ...
	I1010 18:14:07.903319   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt: {Name:mkc3e45adeab2c56df47bde3919e2c30e370ae85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.903506   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key ...
	I1010 18:14:07.903521   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key: {Name:mka461c8525916f7bc85840820bc278320ec6313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.903626   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560
	I1010 18:14:07.903643   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.254]
	I1010 18:14:08.280801   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 ...
	I1010 18:14:08.280860   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560: {Name:mk5acd7350e86bebedada3fd330840a975c10cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.281063   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560 ...
	I1010 18:14:08.281078   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560: {Name:mk1053269a10fe97cf940622a274d032edb2023c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.281164   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:14:08.281248   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
	I1010 18:14:08.281307   99368 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:14:08.281325   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt with IP's: []
	I1010 18:14:08.428528   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt ...
	I1010 18:14:08.428562   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt: {Name:mk868dec1ca79ab4285d30dbc6ee93e0f0415a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.428730   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key ...
	I1010 18:14:08.428741   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key: {Name:mk5632176fd6e0bd1fedbd590f44cb77fc86fc75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.428812   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:14:08.428829   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:14:08.428839   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:14:08.428867   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:14:08.428886   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:14:08.428905   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:14:08.428919   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:14:08.428930   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:14:08.428986   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:14:08.429023   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:14:08.429032   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:14:08.429057   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:14:08.429082   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:14:08.429103   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:14:08.429139   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:08.429166   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.429180   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.429192   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.429725   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:14:08.459934   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:14:08.486537   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:14:08.511793   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:14:08.536743   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:14:08.569819   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:14:08.605499   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:14:08.633615   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:14:08.657501   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:14:08.684906   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:14:08.712812   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:14:08.741219   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
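All profile certificates generated above are now copied under /var/lib/minikube/certs. The apiserver certificate was signed for the service IP, loopback, cluster IP, node IP and HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.104, 192.168.39.254); a quick way to confirm the SANs on the node:

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'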
	I1010 18:14:08.760444   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:14:08.766741   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:14:08.778475   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.783145   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.783213   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.789500   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:14:08.800279   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:14:08.811452   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.816338   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.816413   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.822105   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:14:08.833024   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:14:08.844522   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.849855   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.849915   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.856326   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 18:14:08.868339   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:14:08.873080   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:14:08.873139   99368 kubeadm.go:392] StartCluster: {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:14:08.873227   99368 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:14:08.873270   99368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:14:08.916635   99368 cri.go:89] found id: ""
	I1010 18:14:08.916701   99368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:14:08.927424   99368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:14:08.937639   99368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:14:08.950754   99368 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:14:08.950779   99368 kubeadm.go:157] found existing configuration files:
	
	I1010 18:14:08.950834   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:14:08.962204   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:14:08.962290   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:14:08.975261   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:14:08.986716   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:14:08.986809   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:14:08.998689   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:14:09.010244   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:14:09.010336   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:14:09.022153   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:14:09.033360   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:14:09.033436   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 18:14:09.045356   99368 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 18:14:09.160966   99368 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 18:14:09.161052   99368 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 18:14:09.286355   99368 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 18:14:09.286552   99368 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 18:14:09.286700   99368 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 18:14:09.304139   99368 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 18:14:09.367960   99368 out.go:235]   - Generating certificates and keys ...
	I1010 18:14:09.368080   99368 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 18:14:09.368161   99368 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 18:14:09.384046   99368 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 18:14:09.463103   99368 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1010 18:14:09.567857   99368 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1010 18:14:09.723111   99368 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1010 18:14:09.854233   99368 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1010 18:14:09.854378   99368 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-142481 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	I1010 18:14:09.939722   99368 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1010 18:14:09.939862   99368 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-142481 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	I1010 18:14:10.144343   99368 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 18:14:10.236373   99368 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 18:14:10.313629   99368 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1010 18:14:10.313727   99368 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 18:14:10.420431   99368 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 18:14:10.571019   99368 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 18:14:10.736436   99368 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 18:14:10.835479   99368 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 18:14:10.964962   99368 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 18:14:10.965625   99368 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 18:14:10.970210   99368 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 18:14:10.974272   99368 out.go:235]   - Booting up control plane ...
	I1010 18:14:10.974411   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 18:14:10.974532   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 18:14:10.974647   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 18:14:10.995458   99368 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 18:14:11.002605   99368 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 18:14:11.002687   99368 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 18:14:11.149847   99368 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:14:11.150007   99368 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:14:11.651121   99368 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.084729ms
	I1010 18:14:11.651236   99368 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 18:14:20.808127   99368 kubeadm.go:310] [api-check] The API server is healthy after 9.156536113s
	I1010 18:14:20.824946   99368 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:14:20.839773   99368 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:14:20.870820   99368 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:14:20.871016   99368 kubeadm.go:310] [mark-control-plane] Marking the node ha-142481 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:14:20.887157   99368 kubeadm.go:310] [bootstrap-token] Using token: 644oik.7go4jyqro7if5l4w
	I1010 18:14:20.888737   99368 out.go:235]   - Configuring RBAC rules ...
	I1010 18:14:20.888842   99368 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:14:20.898440   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:14:20.910480   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:14:20.915628   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 18:14:20.920682   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:14:20.931471   99368 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:14:21.219016   99368 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:14:21.647641   99368 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 18:14:22.223206   99368 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 18:14:22.224137   99368 kubeadm.go:310] 
	I1010 18:14:22.224257   99368 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 18:14:22.224281   99368 kubeadm.go:310] 
	I1010 18:14:22.224367   99368 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 18:14:22.224376   99368 kubeadm.go:310] 
	I1010 18:14:22.224411   99368 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 18:14:22.224481   99368 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:14:22.224552   99368 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:14:22.224561   99368 kubeadm.go:310] 
	I1010 18:14:22.224636   99368 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 18:14:22.224649   99368 kubeadm.go:310] 
	I1010 18:14:22.224716   99368 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:14:22.224728   99368 kubeadm.go:310] 
	I1010 18:14:22.224806   99368 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 18:14:22.224925   99368 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:14:22.225015   99368 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:14:22.225025   99368 kubeadm.go:310] 
	I1010 18:14:22.225149   99368 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:14:22.225266   99368 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 18:14:22.225276   99368 kubeadm.go:310] 
	I1010 18:14:22.225390   99368 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 644oik.7go4jyqro7if5l4w \
	I1010 18:14:22.225541   99368 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 18:14:22.225591   99368 kubeadm.go:310] 	--control-plane 
	I1010 18:14:22.225619   99368 kubeadm.go:310] 
	I1010 18:14:22.225743   99368 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:14:22.225753   99368 kubeadm.go:310] 
	I1010 18:14:22.225845   99368 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 644oik.7go4jyqro7if5l4w \
	I1010 18:14:22.225968   99368 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 18:14:22.226430   99368 kubeadm.go:310] W1010 18:14:09.112606     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 18:14:22.226836   99368 kubeadm.go:310] W1010 18:14:09.113373     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 18:14:22.226944   99368 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
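kubeadm init has completed and printed join commands for additional control-plane and worker nodes. The bootstrap token above has a 24h TTL; if it expires before a node joins, an equivalent command can be regenerated (and a fresh certificate key uploaded for control-plane joins):

    sudo kubeadm token create --print-join-command
    sudo kubeadm init phase upload-certs --upload-certs   # prints the --certificate-key for extra control-plane nodes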
	I1010 18:14:22.226978   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:14:22.226989   99368 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 18:14:22.229089   99368 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1010 18:14:22.230625   99368 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:14:22.236334   99368 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1010 18:14:22.236358   99368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:14:22.263826   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 18:14:22.691291   99368 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:14:22.691383   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:22.691399   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481 minikube.k8s.io/updated_at=2024_10_10T18_14_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=true
	I1010 18:14:22.748532   99368 ops.go:34] apiserver oom_adj: -16
	I1010 18:14:22.970463   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:23.471032   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:23.553414   99368 kubeadm.go:1113] duration metric: took 862.100636ms to wait for elevateKubeSystemPrivileges
	I1010 18:14:23.553464   99368 kubeadm.go:394] duration metric: took 14.680326546s to StartCluster
	I1010 18:14:23.553490   99368 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:23.553611   99368 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:14:23.554487   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:23.554725   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:14:23.554735   99368 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:14:23.554719   99368 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:14:23.554809   99368 addons.go:69] Setting storage-provisioner=true in profile "ha-142481"
	I1010 18:14:23.554818   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:14:23.554825   99368 addons.go:234] Setting addon storage-provisioner=true in "ha-142481"
	I1010 18:14:23.554829   99368 addons.go:69] Setting default-storageclass=true in profile "ha-142481"
	I1010 18:14:23.554845   99368 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-142481"
	I1010 18:14:23.554853   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:23.554928   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:23.555209   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.555239   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.555300   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.555338   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.570324   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36105
	I1010 18:14:23.570445   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I1010 18:14:23.570857   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.570886   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.571436   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.571459   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.571566   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.571589   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.571790   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.571894   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.571996   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.572434   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.572484   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.574225   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:14:23.574554   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1010 18:14:23.575091   99368 cert_rotation.go:140] Starting client certificate rotation controller
	I1010 18:14:23.575347   99368 addons.go:234] Setting addon default-storageclass=true in "ha-142481"
	I1010 18:14:23.575391   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:23.575743   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.575783   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.587483   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34291
	I1010 18:14:23.587940   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.588477   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.588502   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.588933   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.589102   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.590856   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:23.590904   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I1010 18:14:23.591399   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.591917   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.591946   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.592234   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.592690   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.592731   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.593082   99368 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:14:23.594593   99368 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:14:23.594613   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:14:23.594629   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:23.597561   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.598029   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:23.598057   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.598292   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:23.598455   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:23.598621   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:23.598811   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:23.608949   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I1010 18:14:23.609372   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.609889   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.609916   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.610243   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.610467   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.612216   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:23.612447   99368 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:14:23.612464   99368 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:14:23.612481   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:23.615402   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.615852   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:23.615886   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.616075   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:23.616255   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:23.616404   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:23.616566   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:23.680546   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:14:23.774021   99368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:14:23.820915   99368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:14:24.197953   99368 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1010 18:14:24.533925   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.533960   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.533990   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534001   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534267   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534297   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534313   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534319   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534320   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534323   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534342   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534328   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534394   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534402   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534551   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534571   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534647   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534673   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534690   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534743   99368 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1010 18:14:24.534893   99368 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1010 18:14:24.535016   99368 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1010 18:14:24.535028   99368 round_trippers.go:469] Request Headers:
	I1010 18:14:24.535038   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:14:24.535046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:14:24.550066   99368 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1010 18:14:24.550802   99368 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1010 18:14:24.550817   99368 round_trippers.go:469] Request Headers:
	I1010 18:14:24.550825   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:14:24.550830   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:14:24.550834   99368 round_trippers.go:473]     Content-Type: application/json
	I1010 18:14:24.554277   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:14:24.554448   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.554465   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.554772   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.554791   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.556620   99368 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1010 18:14:24.558034   99368 addons.go:510] duration metric: took 1.003294102s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1010 18:14:24.558071   99368 start.go:246] waiting for cluster config update ...
	I1010 18:14:24.558083   99368 start.go:255] writing updated cluster config ...
	I1010 18:14:24.559825   99368 out.go:201] 
	I1010 18:14:24.561439   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:24.561503   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:24.563101   99368 out.go:177] * Starting "ha-142481-m02" control-plane node in "ha-142481" cluster
	I1010 18:14:24.564327   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:14:24.564349   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:14:24.564452   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:14:24.564466   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:14:24.564540   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:24.564701   99368 start.go:360] acquireMachinesLock for ha-142481-m02: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:14:24.564749   99368 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "ha-142481-m02"
	I1010 18:14:24.564772   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:14:24.564841   99368 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1010 18:14:24.566583   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:14:24.566679   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:24.566707   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:24.581685   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I1010 18:14:24.582176   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:24.582682   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:24.582704   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:24.583014   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:24.583206   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:24.583343   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:24.583500   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:14:24.583528   99368 client.go:168] LocalClient.Create starting
	I1010 18:14:24.583563   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:14:24.583608   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:14:24.583628   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:14:24.583689   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:14:24.583714   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:14:24.583730   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:14:24.583754   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:14:24.583765   99368 main.go:141] libmachine: (ha-142481-m02) Calling .PreCreateCheck
	I1010 18:14:24.584021   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:24.584567   99368 main.go:141] libmachine: Creating machine...
	I1010 18:14:24.584588   99368 main.go:141] libmachine: (ha-142481-m02) Calling .Create
	I1010 18:14:24.584740   99368 main.go:141] libmachine: (ha-142481-m02) Creating KVM machine...
	I1010 18:14:24.585948   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found existing default KVM network
	I1010 18:14:24.586049   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found existing private KVM network mk-ha-142481
	I1010 18:14:24.586156   99368 main.go:141] libmachine: (ha-142481-m02) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 ...
	I1010 18:14:24.586179   99368 main.go:141] libmachine: (ha-142481-m02) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:14:24.586274   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:24.586151   99736 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:14:24.586354   99368 main.go:141] libmachine: (ha-142481-m02) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:14:24.870233   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:24.870047   99736 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa...
	I1010 18:14:25.124750   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:25.124608   99736 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/ha-142481-m02.rawdisk...
	I1010 18:14:25.124783   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Writing magic tar header
	I1010 18:14:25.124795   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Writing SSH key tar header
	I1010 18:14:25.124806   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:25.124735   99736 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 ...
	I1010 18:14:25.124821   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02
	I1010 18:14:25.124919   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:14:25.124946   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 (perms=drwx------)
	I1010 18:14:25.124954   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:14:25.124968   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:14:25.124973   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:14:25.124980   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:14:25.124988   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:14:25.124994   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:14:25.124999   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:14:25.125037   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:14:25.125058   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:14:25.125067   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home
	I1010 18:14:25.125079   99368 main.go:141] libmachine: (ha-142481-m02) Creating domain...
	I1010 18:14:25.125091   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Skipping /home - not owner
	I1010 18:14:25.126075   99368 main.go:141] libmachine: (ha-142481-m02) define libvirt domain using xml: 
	I1010 18:14:25.126098   99368 main.go:141] libmachine: (ha-142481-m02) <domain type='kvm'>
	I1010 18:14:25.126107   99368 main.go:141] libmachine: (ha-142481-m02)   <name>ha-142481-m02</name>
	I1010 18:14:25.126114   99368 main.go:141] libmachine: (ha-142481-m02)   <memory unit='MiB'>2200</memory>
	I1010 18:14:25.126125   99368 main.go:141] libmachine: (ha-142481-m02)   <vcpu>2</vcpu>
	I1010 18:14:25.126132   99368 main.go:141] libmachine: (ha-142481-m02)   <features>
	I1010 18:14:25.126140   99368 main.go:141] libmachine: (ha-142481-m02)     <acpi/>
	I1010 18:14:25.126150   99368 main.go:141] libmachine: (ha-142481-m02)     <apic/>
	I1010 18:14:25.126164   99368 main.go:141] libmachine: (ha-142481-m02)     <pae/>
	I1010 18:14:25.126176   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126185   99368 main.go:141] libmachine: (ha-142481-m02)   </features>
	I1010 18:14:25.126193   99368 main.go:141] libmachine: (ha-142481-m02)   <cpu mode='host-passthrough'>
	I1010 18:14:25.126201   99368 main.go:141] libmachine: (ha-142481-m02)   
	I1010 18:14:25.126208   99368 main.go:141] libmachine: (ha-142481-m02)   </cpu>
	I1010 18:14:25.126215   99368 main.go:141] libmachine: (ha-142481-m02)   <os>
	I1010 18:14:25.126225   99368 main.go:141] libmachine: (ha-142481-m02)     <type>hvm</type>
	I1010 18:14:25.126232   99368 main.go:141] libmachine: (ha-142481-m02)     <boot dev='cdrom'/>
	I1010 18:14:25.126241   99368 main.go:141] libmachine: (ha-142481-m02)     <boot dev='hd'/>
	I1010 18:14:25.126251   99368 main.go:141] libmachine: (ha-142481-m02)     <bootmenu enable='no'/>
	I1010 18:14:25.126273   99368 main.go:141] libmachine: (ha-142481-m02)   </os>
	I1010 18:14:25.126284   99368 main.go:141] libmachine: (ha-142481-m02)   <devices>
	I1010 18:14:25.126294   99368 main.go:141] libmachine: (ha-142481-m02)     <disk type='file' device='cdrom'>
	I1010 18:14:25.126307   99368 main.go:141] libmachine: (ha-142481-m02)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/boot2docker.iso'/>
	I1010 18:14:25.126318   99368 main.go:141] libmachine: (ha-142481-m02)       <target dev='hdc' bus='scsi'/>
	I1010 18:14:25.126329   99368 main.go:141] libmachine: (ha-142481-m02)       <readonly/>
	I1010 18:14:25.126342   99368 main.go:141] libmachine: (ha-142481-m02)     </disk>
	I1010 18:14:25.126353   99368 main.go:141] libmachine: (ha-142481-m02)     <disk type='file' device='disk'>
	I1010 18:14:25.126365   99368 main.go:141] libmachine: (ha-142481-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:14:25.126380   99368 main.go:141] libmachine: (ha-142481-m02)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/ha-142481-m02.rawdisk'/>
	I1010 18:14:25.126391   99368 main.go:141] libmachine: (ha-142481-m02)       <target dev='hda' bus='virtio'/>
	I1010 18:14:25.126401   99368 main.go:141] libmachine: (ha-142481-m02)     </disk>
	I1010 18:14:25.126413   99368 main.go:141] libmachine: (ha-142481-m02)     <interface type='network'>
	I1010 18:14:25.126425   99368 main.go:141] libmachine: (ha-142481-m02)       <source network='mk-ha-142481'/>
	I1010 18:14:25.126434   99368 main.go:141] libmachine: (ha-142481-m02)       <model type='virtio'/>
	I1010 18:14:25.126443   99368 main.go:141] libmachine: (ha-142481-m02)     </interface>
	I1010 18:14:25.126454   99368 main.go:141] libmachine: (ha-142481-m02)     <interface type='network'>
	I1010 18:14:25.126463   99368 main.go:141] libmachine: (ha-142481-m02)       <source network='default'/>
	I1010 18:14:25.126473   99368 main.go:141] libmachine: (ha-142481-m02)       <model type='virtio'/>
	I1010 18:14:25.126494   99368 main.go:141] libmachine: (ha-142481-m02)     </interface>
	I1010 18:14:25.126518   99368 main.go:141] libmachine: (ha-142481-m02)     <serial type='pty'>
	I1010 18:14:25.126526   99368 main.go:141] libmachine: (ha-142481-m02)       <target port='0'/>
	I1010 18:14:25.126530   99368 main.go:141] libmachine: (ha-142481-m02)     </serial>
	I1010 18:14:25.126535   99368 main.go:141] libmachine: (ha-142481-m02)     <console type='pty'>
	I1010 18:14:25.126545   99368 main.go:141] libmachine: (ha-142481-m02)       <target type='serial' port='0'/>
	I1010 18:14:25.126550   99368 main.go:141] libmachine: (ha-142481-m02)     </console>
	I1010 18:14:25.126556   99368 main.go:141] libmachine: (ha-142481-m02)     <rng model='virtio'>
	I1010 18:14:25.126562   99368 main.go:141] libmachine: (ha-142481-m02)       <backend model='random'>/dev/random</backend>
	I1010 18:14:25.126569   99368 main.go:141] libmachine: (ha-142481-m02)     </rng>
	I1010 18:14:25.126574   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126579   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126610   99368 main.go:141] libmachine: (ha-142481-m02)   </devices>
	I1010 18:14:25.126633   99368 main.go:141] libmachine: (ha-142481-m02) </domain>
	I1010 18:14:25.126647   99368 main.go:141] libmachine: (ha-142481-m02) 
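The block above is the raw libvirt domain XML the kvm2 driver defines for the new m02 machine: 2 vCPUs, 2200 MiB of memory, a cdrom for the boot2docker ISO, a raw virtio-backed disk, and two virtio NICs (the private mk-ha-142481 network plus the default network). As a rough illustration of how such a document can be produced, the standalone Go sketch below renders a stripped-down domain definition from a template; the struct, field names, and placeholder disk path are illustrative assumptions, not the driver's actual types.

// Minimal sketch: render a cut-down libvirt domain XML from a template.
// The real docker-machine-driver-kvm2 emits the fuller document shown in
// the log; the type names and paths here are hypothetical.
package main

import (
	"os"
	"text/template"
)

type domainSpec struct {
	Name     string
	MemoryMB int
	VCPUs    int
	DiskPath string
	Network  string
}

const domainTemplate = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	spec := domainSpec{
		Name:     "ha-142481-m02",
		MemoryMB: 2200,
		VCPUs:    2,
		DiskPath: "/tmp/ha-142481-m02.rawdisk", // placeholder path
		Network:  "mk-ha-142481",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTemplate))
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		panic(err)
	}
}

The rendered XML is then handed to libvirt to define and boot the guest, which is what the "Creating domain..." lines that follow correspond to.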
	I1010 18:14:25.133808   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:63:37:66 in network default
	I1010 18:14:25.134525   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:25.134551   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring networks are active...
	I1010 18:14:25.135477   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring network default is active
	I1010 18:14:25.135837   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring network mk-ha-142481 is active
	I1010 18:14:25.136343   99368 main.go:141] libmachine: (ha-142481-m02) Getting domain xml...
	I1010 18:14:25.137263   99368 main.go:141] libmachine: (ha-142481-m02) Creating domain...
	I1010 18:14:26.362672   99368 main.go:141] libmachine: (ha-142481-m02) Waiting to get IP...
	I1010 18:14:26.363443   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.363821   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.363878   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.363829   99736 retry.go:31] will retry after 237.123337ms: waiting for machine to come up
	I1010 18:14:26.602398   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.602883   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.602910   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.602829   99736 retry.go:31] will retry after 255.919096ms: waiting for machine to come up
	I1010 18:14:26.860273   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.860891   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.860917   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.860860   99736 retry.go:31] will retry after 363.867823ms: waiting for machine to come up
	I1010 18:14:27.226493   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:27.226955   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:27.226984   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:27.226896   99736 retry.go:31] will retry after 430.931001ms: waiting for machine to come up
	I1010 18:14:27.659820   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:27.660273   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:27.660299   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:27.660222   99736 retry.go:31] will retry after 681.867141ms: waiting for machine to come up
	I1010 18:14:28.344366   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:28.344931   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:28.344989   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:28.344843   99736 retry.go:31] will retry after 753.410001ms: waiting for machine to come up
	I1010 18:14:29.099845   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:29.100316   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:29.100345   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:29.100254   99736 retry.go:31] will retry after 1.081998824s: waiting for machine to come up
	I1010 18:14:30.183319   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:30.183733   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:30.183762   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:30.183699   99736 retry.go:31] will retry after 1.2621544s: waiting for machine to come up
	I1010 18:14:31.448194   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:31.448615   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:31.448639   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:31.448571   99736 retry.go:31] will retry after 1.545841483s: waiting for machine to come up
	I1010 18:14:32.996370   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:32.996940   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:32.996970   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:32.996877   99736 retry.go:31] will retry after 1.954916368s: waiting for machine to come up
	I1010 18:14:34.953362   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:34.953810   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:34.953834   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:34.953765   99736 retry.go:31] will retry after 2.832021438s: waiting for machine to come up
	I1010 18:14:37.787030   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:37.787437   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:37.787462   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:37.787399   99736 retry.go:31] will retry after 3.372903659s: waiting for machine to come up
	I1010 18:14:41.162229   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:41.162830   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:41.162860   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:41.162748   99736 retry.go:31] will retry after 3.532610017s: waiting for machine to come up
	I1010 18:14:44.697346   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:44.697811   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:44.697838   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:44.697765   99736 retry.go:31] will retry after 4.121205885s: waiting for machine to come up
	I1010 18:14:48.820235   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.820691   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has current primary IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.820707   99368 main.go:141] libmachine: (ha-142481-m02) Found IP for machine: 192.168.39.186
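The retry.go lines above poll libvirt's DHCP leases for the new domain's MAC address, sleeping for a progressively longer interval (roughly 240 ms growing to about 4 s) until an address appears; here the lease for 52:54:00:70:30:26 shows up with 192.168.39.186 after roughly 22 seconds. The following self-contained Go sketch mirrors that wait-with-backoff shape, under the assumption of a placeholder lookup function rather than minikube's real lease query.

// Hypothetical wait-for-IP loop with growing backoff, mirroring the retry
// pattern visible in the log. lookupIP stands in for the libvirt
// DHCP-lease lookup and always fails in this sketch.
package main

import (
	"errors"
	"fmt"
	"time"
)

func lookupIP() (string, error) {
	return "", errors.New("no DHCP lease yet")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 240 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay += delay / 2 // stretch the wait between attempts
		}
	}
	return "", fmt.Errorf("no IP after %s", timeout)
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println("gave up:", err)
	} else {
		fmt.Println("got IP:", ip)
	}
}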
	I1010 18:14:48.820716   99368 main.go:141] libmachine: (ha-142481-m02) Reserving static IP address...
	I1010 18:14:48.821115   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find host DHCP lease matching {name: "ha-142481-m02", mac: "52:54:00:70:30:26", ip: "192.168.39.186"} in network mk-ha-142481
	I1010 18:14:48.903340   99368 main.go:141] libmachine: (ha-142481-m02) Reserved static IP address: 192.168.39.186
	I1010 18:14:48.903376   99368 main.go:141] libmachine: (ha-142481-m02) Waiting for SSH to be available...
	I1010 18:14:48.903387   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Getting to WaitForSSH function...
	I1010 18:14:48.906232   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.906828   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:30:26}
	I1010 18:14:48.906862   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.907057   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using SSH client type: external
	I1010 18:14:48.907087   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa (-rw-------)
	I1010 18:14:48.907120   99368 main.go:141] libmachine: (ha-142481-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:14:48.907134   99368 main.go:141] libmachine: (ha-142481-m02) DBG | About to run SSH command:
	I1010 18:14:48.907147   99368 main.go:141] libmachine: (ha-142481-m02) DBG | exit 0
	I1010 18:14:49.037555   99368 main.go:141] libmachine: (ha-142481-m02) DBG | SSH cmd err, output: <nil>: 
	I1010 18:14:49.037876   99368 main.go:141] libmachine: (ha-142481-m02) KVM machine creation complete!
	I1010 18:14:49.038189   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:49.038756   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:49.038950   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:49.039103   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:14:49.039117   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetState
	I1010 18:14:49.040560   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:14:49.040573   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:14:49.040578   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:14:49.040584   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.042911   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.043240   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.043266   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.043533   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.043730   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.043927   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.044092   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.044245   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.044498   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.044515   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:14:49.156568   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:49.156599   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:14:49.156607   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.159819   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.160299   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.160329   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.160572   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.160782   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.160954   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.161115   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.161282   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.161504   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.161519   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:14:49.274150   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:14:49.274238   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:14:49.274249   99368 main.go:141] libmachine: Provisioning with buildroot...
	I1010 18:14:49.274261   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.274541   99368 buildroot.go:166] provisioning hostname "ha-142481-m02"
	I1010 18:14:49.274574   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.274809   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.277484   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.277861   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.277893   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.278037   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.278241   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.278416   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.278595   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.278858   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.279047   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.279061   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481-m02 && echo "ha-142481-m02" | sudo tee /etc/hostname
	I1010 18:14:49.409335   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481-m02
	
	I1010 18:14:49.409369   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.412112   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.412427   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.412458   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.412712   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.412921   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.413069   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.413182   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.413398   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.413565   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.413581   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:14:49.542003   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:49.542039   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:14:49.542058   99368 buildroot.go:174] setting up certificates
	I1010 18:14:49.542069   99368 provision.go:84] configureAuth start
	I1010 18:14:49.542080   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.542340   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:49.545159   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.545524   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.545554   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.545698   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.547804   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.548115   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.548135   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.548323   99368 provision.go:143] copyHostCerts
	I1010 18:14:49.548352   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:49.548392   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:14:49.548403   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:49.548486   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:14:49.548582   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:49.548609   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:14:49.548619   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:49.548657   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:14:49.548719   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:49.548743   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:14:49.548752   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:49.548788   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:14:49.548865   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481-m02 san=[127.0.0.1 192.168.39.186 ha-142481-m02 localhost minikube]
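At this step provision.go issues a server certificate for the new node with SANs [127.0.0.1 192.168.39.186 ha-142481-m02 localhost minikube] and organization jenkins.ha-142481-m02, signed by the minikube CA (ca.pem/ca-key.pem). The Go sketch below illustrates issuing a certificate with that SAN set; for brevity it self-signs instead of signing with the CA, and the key size is an assumption (the 26280h expiry matches the CertExpiration field in the cluster config above).

// Sketch only: create a server certificate carrying the SANs from the log.
// Self-signed for brevity; minikube signs with its CA key instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-142481-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-142481-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.186")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}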
	I1010 18:14:49.606708   99368 provision.go:177] copyRemoteCerts
	I1010 18:14:49.606781   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:14:49.606811   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.609620   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.609921   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.609952   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.610121   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.610322   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.610506   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.610631   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:49.695655   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:14:49.695736   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 18:14:49.723445   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:14:49.723520   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:14:49.748318   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:14:49.748402   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:14:49.773423   99368 provision.go:87] duration metric: took 231.339814ms to configureAuth
	I1010 18:14:49.773451   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:14:49.773626   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:49.773705   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.776350   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.776701   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.776726   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.776913   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.777128   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.777292   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.777435   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.777590   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.777795   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.777817   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:14:50.018484   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:14:50.018513   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:14:50.018525   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetURL
	I1010 18:14:50.019796   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using libvirt version 6000000
	I1010 18:14:50.022107   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.022432   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.022476   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.022628   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:14:50.022646   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:14:50.022657   99368 client.go:171] duration metric: took 25.439118717s to LocalClient.Create
	I1010 18:14:50.022695   99368 start.go:167] duration metric: took 25.439191435s to libmachine.API.Create "ha-142481"
	I1010 18:14:50.022708   99368 start.go:293] postStartSetup for "ha-142481-m02" (driver="kvm2")
	I1010 18:14:50.022725   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:14:50.022763   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.023030   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:14:50.023055   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.025463   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.025834   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.025869   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.026093   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.026322   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.026520   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.026673   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.115488   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:14:50.120106   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:14:50.120146   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:14:50.120259   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:14:50.120347   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:14:50.120360   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:14:50.120462   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:14:50.130011   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:50.156296   99368 start.go:296] duration metric: took 133.570332ms for postStartSetup
	I1010 18:14:50.156350   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:50.156937   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:50.159597   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.160043   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.160071   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.160321   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:50.160495   99368 start.go:128] duration metric: took 25.595643097s to createHost
	I1010 18:14:50.160517   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.162762   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.163085   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.163110   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.163276   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.163459   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.163603   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.163760   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.163931   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:50.164125   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:50.164139   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:14:50.277898   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584090.237251579
	
	I1010 18:14:50.277925   99368 fix.go:216] guest clock: 1728584090.237251579
	I1010 18:14:50.277933   99368 fix.go:229] Guest: 2024-10-10 18:14:50.237251579 +0000 UTC Remote: 2024-10-10 18:14:50.160506288 +0000 UTC m=+72.091094363 (delta=76.745291ms)
	I1010 18:14:50.277949   99368 fix.go:200] guest clock delta is within tolerance: 76.745291ms
	I1010 18:14:50.277955   99368 start.go:83] releasing machines lock for "ha-142481-m02", held for 25.713195595s
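The guest-clock check above compares the output of `date +%s.%N` inside the new VM against the host's wall clock and accepts the machine when the delta (76.745291ms here) is within tolerance. A rough shell equivalent of that comparison, hypothetical since minikube does this in Go over the already-established SSH session:

    host_now=$(date +%s.%N)
    guest_now=$(minikube -p ha-142481 ssh -n m02 -- date +%s.%N)
    awk -v h="$host_now" -v g="$guest_now" 'BEGIN { printf "guest clock delta: %.6fs\n", h - g }'
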
	I1010 18:14:50.277975   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.278294   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:50.280842   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.281256   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.281283   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.283734   99368 out.go:177] * Found network options:
	I1010 18:14:50.285300   99368 out.go:177]   - NO_PROXY=192.168.39.104
	W1010 18:14:50.286708   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:14:50.286748   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287340   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287549   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287642   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:14:50.287694   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	W1010 18:14:50.287740   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:14:50.287827   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:14:50.287852   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.290823   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.290971   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291276   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.291307   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291499   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.291594   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.291635   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291693   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.291858   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.291862   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.292017   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.292017   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.292146   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.292458   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.532570   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:14:50.540169   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:14:50.540248   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:14:50.557472   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:14:50.557500   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:14:50.557574   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:14:50.574787   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:14:50.590774   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:14:50.590848   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:14:50.605941   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:14:50.620901   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:14:50.753387   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:14:50.919446   99368 docker.go:233] disabling docker service ...
	I1010 18:14:50.919535   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:14:50.934691   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:14:50.948383   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:14:51.098212   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:14:51.222205   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:14:51.236395   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:14:51.255620   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:14:51.255682   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.265706   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:14:51.265766   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.276288   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.287384   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.298290   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:14:51.309391   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.322059   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.341165   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
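Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with pause_image pinned to registry.k8s.io/pause:3.10, cgroup_manager set to "cgroupfs" with conmon_cgroup = "pod", and a default_sysctls entry opening unprivileged ports (net.ipv4.ip_unprivileged_port_start=0). A minimal sketch for confirming those keys after the edits, assuming the file path from the log:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
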
	I1010 18:14:51.352334   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:14:51.361995   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:14:51.362055   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:14:51.376647   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
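The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, which the following modprobe fixes before IPv4 forwarding is switched on; both are standard kubeadm prerequisites. A sketch of the same preparation done persistently, rather than the transient modprobe and /proc write minikube itself uses here:

    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-k8s.conf
    sudo sysctl --system
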
	I1010 18:14:51.387344   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:51.501276   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:14:51.591570   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:14:51.591667   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:14:51.596519   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:14:51.596593   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:14:51.600964   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:14:51.642625   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:14:51.642709   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:51.670857   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:51.701992   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:14:51.703402   99368 out.go:177]   - env NO_PROXY=192.168.39.104
	I1010 18:14:51.704577   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:51.707504   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:51.707889   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:51.707921   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:51.708187   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:14:51.712581   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:51.728042   99368 mustload.go:65] Loading cluster: ha-142481
	I1010 18:14:51.728254   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:51.728534   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:51.728571   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:51.744127   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I1010 18:14:51.744674   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:51.745223   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:51.745247   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:51.745620   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:51.745831   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:51.747403   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:51.747706   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:51.747737   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:51.763030   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41747
	I1010 18:14:51.763446   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:51.763925   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:51.763949   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:51.764295   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:51.764486   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:51.764627   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.186
	I1010 18:14:51.764637   99368 certs.go:194] generating shared ca certs ...
	I1010 18:14:51.764650   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.764765   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:14:51.764803   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:14:51.764812   99368 certs.go:256] generating profile certs ...
	I1010 18:14:51.764912   99368 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:14:51.764937   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992
	I1010 18:14:51.764951   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.186 192.168.39.254]
	I1010 18:14:51.993768   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 ...
	I1010 18:14:51.993803   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992: {Name:mk9eca5b6bcf4de2bd1cb4984282b7c5168c504a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.993982   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992 ...
	I1010 18:14:51.993996   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992: {Name:mk53f522d230afb3a7d1b4f761a379d6be7ff843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.994077   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:14:51.994210   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
	I1010 18:14:51.994347   99368 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
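The regenerated apiserver serving certificate above is issued with SANs covering the service IP (10.96.0.1), loopback, both control-plane node IPs (192.168.39.104, 192.168.39.186) and the kube-vip VIP 192.168.39.254, so clients can reach the API through any of those endpoints without TLS name errors. A sketch for inspecting those SANs on the host, using the profile path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
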
	I1010 18:14:51.994363   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:14:51.994376   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:14:51.994389   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:14:51.994407   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:14:51.994420   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:14:51.994432   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:14:51.994443   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:14:51.994454   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:14:51.994507   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:14:51.994535   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:14:51.994545   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:14:51.994565   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:14:51.994589   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:14:51.994613   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:14:51.994650   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:51.994681   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:51.994695   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:14:51.994706   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:14:51.994740   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:51.997958   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:51.998443   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:51.998473   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:51.998636   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:51.998839   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:51.999035   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:51.999239   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:52.077280   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1010 18:14:52.082655   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1010 18:14:52.094293   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1010 18:14:52.102951   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1010 18:14:52.115800   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1010 18:14:52.120082   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1010 18:14:52.130693   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1010 18:14:52.135696   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1010 18:14:52.148816   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1010 18:14:52.158283   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1010 18:14:52.169959   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1010 18:14:52.174352   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1010 18:14:52.185494   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:14:52.211191   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:14:52.237842   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:14:52.263110   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:14:52.287843   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1010 18:14:52.313473   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:14:52.338065   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:14:52.363071   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:14:52.387579   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:14:52.412888   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:14:52.437781   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:14:52.464757   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1010 18:14:52.481913   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1010 18:14:52.499025   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1010 18:14:52.515900   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1010 18:14:52.533545   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1010 18:14:52.550809   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1010 18:14:52.567422   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1010 18:14:52.584795   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:14:52.590891   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:14:52.602879   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.607603   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.607658   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.613708   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:14:52.631468   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:14:52.643064   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.647811   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.647874   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.653881   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:14:52.665152   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:14:52.676562   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.681256   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.681313   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.687223   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
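Each CA above is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs), which is how OpenSSL's default trust lookup finds it; the same effect c_rehash produces. A sketch of deriving one of those link names by hand:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # yields b5213941.0 here
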
	I1010 18:14:52.699194   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:14:52.703641   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:14:52.703707   99368 kubeadm.go:934] updating node {m02 192.168.39.186 8443 v1.31.1 crio true true} ...
	I1010 18:14:52.703805   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
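The kubelet drop-in above overrides ExecStart so the kubelet on the new machine registers as ha-142481-m02 with --node-ip=192.168.39.186 rather than the defaults baked into the unit file. A sketch for checking the rendered unit on the node, assuming the systemd layout written a few steps later in this log:

    minikube -p ha-142481 ssh -n m02 -- systemctl cat kubelet
    minikube -p ha-142481 ssh -n m02 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
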
	I1010 18:14:52.703835   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:14:52.703878   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:14:52.723026   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:14:52.723119   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
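The static pod manifest above runs kube-vip with control-plane load-balancing enabled (cp_enable / lb_enable) and leader election over the plndr-cp-lock lease, so exactly one control-plane node advertises the VIP 192.168.39.254 on port 8443 at any time. Once the cluster is up, a sketch for seeing which node currently holds that lease and where the kube-vip pods run:

    kubectl --context ha-142481 -n kube-system get lease plndr-cp-lock
    kubectl --context ha-142481 -n kube-system get pods -o wide | grep kube-vip
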
	I1010 18:14:52.723189   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:52.734671   99368 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1010 18:14:52.734752   99368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:52.745741   99368 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1010 18:14:52.745751   99368 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1010 18:14:52.745751   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1010 18:14:52.745871   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:14:52.745940   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:14:52.751099   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1010 18:14:52.751132   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1010 18:14:53.544046   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:14:53.544130   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:14:53.549472   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1010 18:14:53.549517   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1010 18:14:53.647955   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:14:53.681722   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:14:53.681823   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:14:53.695932   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1010 18:14:53.695987   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
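The kubectl, kubeadm and kubelet binaries above are fetched from dl.k8s.io together with their published .sha256 checksums, cached under the host's .minikube/cache directory, and scp'd into /var/lib/minikube/binaries/v1.31.1 on the node. A manual equivalent of the cached download plus checksum verification, as a sketch rather than what minikube's download.go does internally:

    ver=v1.31.1
    curl -fLO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/kubeadm"
    curl -fLO "https://dl.k8s.io/release/${ver}/bin/linux/amd64/kubeadm.sha256"
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
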
	I1010 18:14:54.175941   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1010 18:14:54.187282   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1010 18:14:54.205511   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:14:54.223508   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1010 18:14:54.241125   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:14:54.245490   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:54.259173   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:54.401351   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:14:54.419984   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:54.420484   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:54.420546   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:54.436033   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I1010 18:14:54.436556   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:54.437251   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:54.437281   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:54.437607   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:54.437831   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:54.438020   99368 start.go:317] joinCluster: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:14:54.438157   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1010 18:14:54.438180   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:54.441157   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:54.441581   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:54.441609   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:54.441854   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:54.442034   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:54.442149   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:54.442289   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:54.604951   99368 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:14:54.605013   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wt3o3w.k6pkjtb13sd57t6w --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m02 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443"
	I1010 18:15:14.578208   99368 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wt3o3w.k6pkjtb13sd57t6w --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m02 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443": (19.973131424s)
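The join above was minted on the primary with `kubeadm token create --print-join-command --ttl=0` and then extended with the control-plane flags for the new node (--control-plane, --apiserver-advertise-address, --apiserver-bind-port, plus the cri-o socket and node name). A sketch of reproducing the first half by hand against this profile:

    minikube -p ha-142481 ssh -- sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm token create --print-join-command --ttl=0
    # then append on the joining node:
    #   --control-plane --apiserver-advertise-address=<node IP> --apiserver-bind-port=8443
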
	I1010 18:15:14.578257   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1010 18:15:15.095544   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481-m02 minikube.k8s.io/updated_at=2024_10_10T18_15_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=false
	I1010 18:15:15.208568   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-142481-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1010 18:15:15.337167   99368 start.go:319] duration metric: took 20.899144024s to joinCluster
	I1010 18:15:15.337270   99368 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:15:15.337601   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:15:15.339949   99368 out.go:177] * Verifying Kubernetes components...
	I1010 18:15:15.341260   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:15:15.615485   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:15:15.642973   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:15:15.643325   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1010 18:15:15.643422   99368 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.104:8443
	I1010 18:15:15.643731   99368 node_ready.go:35] waiting up to 6m0s for node "ha-142481-m02" to be "Ready" ...
	I1010 18:15:15.643859   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:15.643869   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:15.643880   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:15.643892   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:15.665402   99368 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
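The loop that follows polls GET /api/v1/nodes/ha-142481-m02 (through the primary's endpoint, per the stale-host override warning above) until the node reports the Ready condition, for at most 6m0s. A hedged kubectl equivalent of the same wait:

    kubectl --context ha-142481 wait --for=condition=Ready node/ha-142481-m02 --timeout=6m
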
	I1010 18:15:16.144314   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:16.144340   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:16.144351   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:16.144357   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:16.150219   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:16.644045   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:16.644074   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:16.644086   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:16.644093   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:16.654043   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:17.144554   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:17.144581   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:17.144590   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:17.144595   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:17.148858   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:17.643970   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:17.644078   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:17.644104   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:17.644122   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:17.653880   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:17.654572   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:18.144266   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:18.144294   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:18.144302   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:18.144308   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:18.147936   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:18.644346   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:18.644369   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:18.644378   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:18.644382   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:18.648587   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:19.144413   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:19.144443   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:19.144454   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:19.144460   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:19.147695   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:19.644688   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:19.644715   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:19.644726   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:19.644730   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:19.648487   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:20.144679   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:20.144700   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:20.144708   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:20.144712   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:20.148475   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:20.149193   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:20.644644   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:20.644675   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:20.644687   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:20.644694   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:20.648513   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:21.144341   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:21.144366   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:21.144377   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:21.144384   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:21.147839   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:21.644909   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:21.644934   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:21.644942   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:21.644946   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:21.648387   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:22.144173   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:22.144196   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:22.144205   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:22.144209   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:22.147385   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:22.644414   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:22.644444   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:22.644456   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:22.644462   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:22.713904   99368 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I1010 18:15:22.714410   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:23.144902   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:23.144934   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:23.144947   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:23.144954   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:23.147993   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:23.644885   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:23.644971   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:23.644995   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:23.645002   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:23.648711   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:24.144645   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:24.144673   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:24.144685   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:24.144690   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:24.148415   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:24.644379   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:24.644413   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:24.644424   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:24.644429   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:24.648175   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:25.144097   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:25.144120   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:25.144128   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:25.144133   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:25.147203   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:25.147854   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:25.644276   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:25.644303   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:25.644311   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:25.644316   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:25.647929   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:26.143986   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:26.144010   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:26.144018   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:26.144023   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:26.147277   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:26.644893   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:26.644924   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:26.644934   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:26.644939   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:26.648455   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:27.144020   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:27.144042   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:27.144050   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:27.144053   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:27.150719   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:15:27.151307   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:27.644596   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:27.644620   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:27.644628   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:27.644632   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:27.648391   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:28.144777   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:28.144801   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:28.144809   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:28.144813   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:28.148258   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:28.644636   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:28.644665   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:28.644673   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:28.644676   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:28.648181   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.144094   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:29.144120   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:29.144128   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:29.144133   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:29.147945   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.644955   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:29.644977   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:29.644986   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:29.644990   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:29.648391   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.649199   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:30.144628   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:30.144653   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:30.144661   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:30.144665   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:30.148286   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:30.644255   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:30.644288   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:30.644299   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:30.644304   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:30.648062   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:31.144076   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:31.144101   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:31.144109   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:31.144112   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:31.148081   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:31.644011   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:31.644037   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:31.644049   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:31.644055   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:31.653327   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:31.653921   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:32.144247   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:32.144273   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:32.144282   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:32.144286   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:32.147700   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:32.644836   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:32.644894   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:32.644908   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:32.644913   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:32.648022   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:33.144204   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:33.144231   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:33.144240   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:33.144242   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:33.148094   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:33.644909   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:33.644932   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:33.644940   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:33.644943   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:33.648586   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.144644   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.144672   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.144680   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.144685   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.148129   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.148805   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:34.644279   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.644310   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.644321   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.644329   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.648073   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.648695   99368 node_ready.go:49] node "ha-142481-m02" has status "Ready":"True"
	I1010 18:15:34.648716   99368 node_ready.go:38] duration metric: took 19.004960132s for node "ha-142481-m02" to be "Ready" ...
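The loop above shows minikube polling the API server roughly every 500ms until node ha-142481-m02 reports the Ready condition as True. Below is a minimal sketch of the same polling pattern using a plain client-go Clientset; it is an illustration only, not minikube's implementation, and the kubeconfig path, interval, and timeout are assumed values.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the named node until its Ready condition is True,
// mirroring the repeated GET /api/v1/nodes/<name> requests in the log above.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed: default kubeconfig
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "ha-142481-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node ha-142481-m02 is Ready")
}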
	I1010 18:15:34.648732   99368 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:15:34.648874   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:34.648887   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.648899   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.648905   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.653067   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:34.660867   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.660985   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-28dll
	I1010 18:15:34.660996   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.661004   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.661008   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.673094   99368 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1010 18:15:34.673807   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.673825   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.673833   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.673838   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.679300   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:34.679893   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.679919   99368 pod_ready.go:82] duration metric: took 19.021803ms for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.679934   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.680016   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xfhq8
	I1010 18:15:34.680028   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.680039   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.680046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.687874   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:15:34.688550   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.688567   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.688575   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.688578   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.693607   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:34.694298   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.694318   99368 pod_ready.go:82] duration metric: took 14.376081ms for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.694329   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.694401   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481
	I1010 18:15:34.694412   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.694422   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.694427   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.705466   99368 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1010 18:15:34.706122   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.706142   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.706152   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.706157   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.713862   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:15:34.714292   99368 pod_ready.go:93] pod "etcd-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.714313   99368 pod_ready.go:82] duration metric: took 19.977824ms for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.714324   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.714393   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m02
	I1010 18:15:34.714397   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.714407   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.714411   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.724173   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:34.725474   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.725492   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.725502   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.725507   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.728517   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:15:34.729350   99368 pod_ready.go:93] pod "etcd-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.729374   99368 pod_ready.go:82] duration metric: took 15.044498ms for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.729392   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.844828   99368 request.go:632] Waited for 115.352966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:15:34.844940   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:15:34.844954   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.844965   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.844980   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.849582   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.044720   99368 request.go:632] Waited for 194.440409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.044815   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.044823   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.044922   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.044934   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.049101   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.049648   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.049671   99368 pod_ready.go:82] duration metric: took 320.272231ms for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.049694   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.244714   99368 request.go:632] Waited for 194.93387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:15:35.244774   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:15:35.244780   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.244788   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.244791   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.248696   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:35.444831   99368 request.go:632] Waited for 195.412897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:35.444927   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:35.444933   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.444942   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.444946   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.448991   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.450079   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.450103   99368 pod_ready.go:82] duration metric: took 400.401007ms for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.450118   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.645157   99368 request.go:632] Waited for 194.960575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:15:35.645249   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:15:35.645257   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.645268   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.645274   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.648746   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:35.844906   99368 request.go:632] Waited for 195.418533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.844969   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.844974   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.844982   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.844985   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.849036   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.849631   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.849652   99368 pod_ready.go:82] duration metric: took 399.526564ms for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.849663   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.044750   99368 request.go:632] Waited for 194.993362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:15:36.044821   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:15:36.044829   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.044841   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.044860   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.048403   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.244872   99368 request.go:632] Waited for 195.41194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:36.244966   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:36.244978   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.244991   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.245003   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.248422   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.249090   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:36.249112   99368 pod_ready.go:82] duration metric: took 399.440459ms for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.249127   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.445275   99368 request.go:632] Waited for 196.04196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:15:36.445337   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:15:36.445343   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.445350   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.445354   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.449425   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:36.644689   99368 request.go:632] Waited for 194.411636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:36.644795   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:36.644806   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.644817   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.644825   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.648756   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.649220   99368 pod_ready.go:93] pod "kube-proxy-gwvrh" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:36.649241   99368 pod_ready.go:82] duration metric: took 400.105171ms for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.649254   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.844338   99368 request.go:632] Waited for 194.987151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:15:36.844405   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:15:36.844411   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.844420   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.844434   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.848477   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:37.044640   99368 request.go:632] Waited for 195.367234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.044708   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.044715   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.044726   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.044731   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.048116   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.048721   99368 pod_ready.go:93] pod "kube-proxy-srfng" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.048745   99368 pod_ready.go:82] duration metric: took 399.483125ms for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.048759   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.244914   99368 request.go:632] Waited for 196.022775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:15:37.244993   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:15:37.245004   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.245029   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.245036   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.248801   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.444916   99368 request.go:632] Waited for 195.401869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:37.444984   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:37.444991   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.445002   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.445008   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.448457   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.449008   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.449028   99368 pod_ready.go:82] duration metric: took 400.260773ms for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.449039   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.645172   99368 request.go:632] Waited for 196.046461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:15:37.645249   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:15:37.645256   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.645265   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.645271   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.648894   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.844799   99368 request.go:632] Waited for 195.42858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.844915   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.844926   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.844937   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.844945   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.848459   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.849058   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.849077   99368 pod_ready.go:82] duration metric: took 400.031968ms for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.849089   99368 pod_ready.go:39] duration metric: took 3.200308757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
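The "Waited for ... due to client-side throttling" entries above come from client-go's client-side rate limiter delaying requests once they exceed the configured rate (as the message notes, this is not API priority and fairness). As a hedged illustration only, that limiter is tuned through the QPS and Burst fields on the rest.Config used to build the client; the values below are arbitrary examples, not the ones minikube uses.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed: default kubeconfig
	if err != nil {
		panic(err)
	}
	// These two knobs drive the client-side token bucket; once requests
	// outpace them, client-go delays the call and logs the
	// "Waited for ... due to client-side throttling" message seen above.
	cfg.QPS = 5    // example value only
	cfg.Burst = 10 // example value only
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("built rate-limited client: %T\n", cs)
}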
	I1010 18:15:37.849113   99368 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:15:37.849168   99368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:15:37.867701   99368 api_server.go:72] duration metric: took 22.53038697s to wait for apiserver process to appear ...
	I1010 18:15:37.867737   99368 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:15:37.867762   99368 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I1010 18:15:37.874449   99368 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I1010 18:15:37.874534   99368 round_trippers.go:463] GET https://192.168.39.104:8443/version
	I1010 18:15:37.874545   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.874561   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.874568   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.875635   99368 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1010 18:15:37.875761   99368 api_server.go:141] control plane version: v1.31.1
	I1010 18:15:37.875781   99368 api_server.go:131] duration metric: took 8.036588ms to wait for apiserver health ...
	I1010 18:15:37.875792   99368 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:15:38.045248   99368 request.go:632] Waited for 169.346857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.045336   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.045344   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.045356   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.045367   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.051387   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:38.056244   99368 system_pods.go:59] 17 kube-system pods found
	I1010 18:15:38.056282   99368 system_pods.go:61] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:15:38.056289   99368 system_pods.go:61] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:15:38.056293   99368 system_pods.go:61] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:15:38.056297   99368 system_pods.go:61] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:15:38.056300   99368 system_pods.go:61] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:15:38.056308   99368 system_pods.go:61] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:15:38.056311   99368 system_pods.go:61] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:15:38.056315   99368 system_pods.go:61] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:15:38.056318   99368 system_pods.go:61] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:15:38.056323   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:15:38.056327   99368 system_pods.go:61] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:15:38.056331   99368 system_pods.go:61] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:15:38.056334   99368 system_pods.go:61] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:15:38.056337   99368 system_pods.go:61] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:15:38.056340   99368 system_pods.go:61] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:15:38.056343   99368 system_pods.go:61] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:15:38.056345   99368 system_pods.go:61] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:15:38.056352   99368 system_pods.go:74] duration metric: took 180.553557ms to wait for pod list to return data ...
	I1010 18:15:38.056362   99368 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:15:38.244537   99368 request.go:632] Waited for 188.093724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:15:38.244618   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:15:38.244624   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.244633   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.244641   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.248165   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:38.248399   99368 default_sa.go:45] found service account: "default"
	I1010 18:15:38.248416   99368 default_sa.go:55] duration metric: took 192.046524ms for default service account to be created ...
	I1010 18:15:38.248427   99368 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:15:38.444704   99368 request.go:632] Waited for 196.206785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.444765   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.444770   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.444778   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.444783   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.479585   99368 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I1010 18:15:38.484055   99368 system_pods.go:86] 17 kube-system pods found
	I1010 18:15:38.484088   99368 system_pods.go:89] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:15:38.484094   99368 system_pods.go:89] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:15:38.484098   99368 system_pods.go:89] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:15:38.484102   99368 system_pods.go:89] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:15:38.484106   99368 system_pods.go:89] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:15:38.484109   99368 system_pods.go:89] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:15:38.484113   99368 system_pods.go:89] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:15:38.484116   99368 system_pods.go:89] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:15:38.484119   99368 system_pods.go:89] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:15:38.484122   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:15:38.484125   99368 system_pods.go:89] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:15:38.484128   99368 system_pods.go:89] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:15:38.484132   99368 system_pods.go:89] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:15:38.484135   99368 system_pods.go:89] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:15:38.484139   99368 system_pods.go:89] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:15:38.484141   99368 system_pods.go:89] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:15:38.484144   99368 system_pods.go:89] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:15:38.484152   99368 system_pods.go:126] duration metric: took 235.71716ms to wait for k8s-apps to be running ...
	I1010 18:15:38.484162   99368 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:15:38.484219   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:15:38.499587   99368 system_svc.go:56] duration metric: took 15.413149ms WaitForService to wait for kubelet
	I1010 18:15:38.499630   99368 kubeadm.go:582] duration metric: took 23.162321939s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:15:38.499655   99368 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:15:38.645127   99368 request.go:632] Waited for 145.342386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I1010 18:15:38.645247   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I1010 18:15:38.645259   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.645267   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.645272   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.649291   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:38.650032   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:15:38.650065   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:15:38.650077   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:15:38.650081   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:15:38.650086   99368 node_conditions.go:105] duration metric: took 150.425543ms to run NodePressure ...
	I1010 18:15:38.650104   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:15:38.650137   99368 start.go:255] writing updated cluster config ...
	I1010 18:15:38.652551   99368 out.go:201] 
	I1010 18:15:38.654476   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:15:38.654593   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:15:38.656332   99368 out.go:177] * Starting "ha-142481-m03" control-plane node in "ha-142481" cluster
	I1010 18:15:38.657633   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:15:38.657659   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:15:38.657790   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:15:38.657806   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:15:38.657908   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:15:38.658076   99368 start.go:360] acquireMachinesLock for ha-142481-m03: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:15:38.658122   99368 start.go:364] duration metric: took 26.16µs to acquireMachinesLock for "ha-142481-m03"
	I1010 18:15:38.658147   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:15:38.658249   99368 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1010 18:15:38.660071   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:15:38.660197   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:15:38.660258   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:15:38.676361   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I1010 18:15:38.676935   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:15:38.677467   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:15:38.677506   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:15:38.677892   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:15:38.678105   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:15:38.678326   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:15:38.678504   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:15:38.678538   99368 client.go:168] LocalClient.Create starting
	I1010 18:15:38.678568   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:15:38.678601   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:15:38.678614   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:15:38.678663   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:15:38.678681   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:15:38.678691   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:15:38.678707   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:15:38.678715   99368 main.go:141] libmachine: (ha-142481-m03) Calling .PreCreateCheck
	I1010 18:15:38.678898   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:15:38.679630   99368 main.go:141] libmachine: Creating machine...
	I1010 18:15:38.679653   99368 main.go:141] libmachine: (ha-142481-m03) Calling .Create
	I1010 18:15:38.680877   99368 main.go:141] libmachine: (ha-142481-m03) Creating KVM machine...
	I1010 18:15:38.681726   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found existing default KVM network
	I1010 18:15:38.681754   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found existing private KVM network mk-ha-142481
	I1010 18:15:38.681811   99368 main.go:141] libmachine: (ha-142481-m03) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 ...
	I1010 18:15:38.681845   99368 main.go:141] libmachine: (ha-142481-m03) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:15:38.681908   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:38.681805  100144 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:15:38.681991   99368 main.go:141] libmachine: (ha-142481-m03) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:15:38.938889   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:38.938689  100144 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa...
	I1010 18:15:39.048405   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:39.048265  100144 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/ha-142481-m03.rawdisk...
	I1010 18:15:39.048440   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Writing magic tar header
	I1010 18:15:39.048457   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Writing SSH key tar header
	I1010 18:15:39.048467   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:39.048382  100144 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 ...
	I1010 18:15:39.048494   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03
	I1010 18:15:39.048510   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:15:39.048527   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 (perms=drwx------)
	I1010 18:15:39.048549   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:15:39.048564   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:15:39.048578   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:15:39.048592   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:15:39.048605   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:15:39.048635   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:15:39.048655   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:15:39.048662   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:15:39.048676   99368 main.go:141] libmachine: (ha-142481-m03) Creating domain...
	I1010 18:15:39.048685   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:15:39.048696   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home
	I1010 18:15:39.048710   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Skipping /home - not owner
	I1010 18:15:39.049753   99368 main.go:141] libmachine: (ha-142481-m03) define libvirt domain using xml: 
	I1010 18:15:39.049779   99368 main.go:141] libmachine: (ha-142481-m03) <domain type='kvm'>
	I1010 18:15:39.049790   99368 main.go:141] libmachine: (ha-142481-m03)   <name>ha-142481-m03</name>
	I1010 18:15:39.049799   99368 main.go:141] libmachine: (ha-142481-m03)   <memory unit='MiB'>2200</memory>
	I1010 18:15:39.049809   99368 main.go:141] libmachine: (ha-142481-m03)   <vcpu>2</vcpu>
	I1010 18:15:39.049816   99368 main.go:141] libmachine: (ha-142481-m03)   <features>
	I1010 18:15:39.049822   99368 main.go:141] libmachine: (ha-142481-m03)     <acpi/>
	I1010 18:15:39.049830   99368 main.go:141] libmachine: (ha-142481-m03)     <apic/>
	I1010 18:15:39.049835   99368 main.go:141] libmachine: (ha-142481-m03)     <pae/>
	I1010 18:15:39.049839   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.049845   99368 main.go:141] libmachine: (ha-142481-m03)   </features>
	I1010 18:15:39.049849   99368 main.go:141] libmachine: (ha-142481-m03)   <cpu mode='host-passthrough'>
	I1010 18:15:39.049856   99368 main.go:141] libmachine: (ha-142481-m03)   
	I1010 18:15:39.049862   99368 main.go:141] libmachine: (ha-142481-m03)   </cpu>
	I1010 18:15:39.049890   99368 main.go:141] libmachine: (ha-142481-m03)   <os>
	I1010 18:15:39.049903   99368 main.go:141] libmachine: (ha-142481-m03)     <type>hvm</type>
	I1010 18:15:39.049915   99368 main.go:141] libmachine: (ha-142481-m03)     <boot dev='cdrom'/>
	I1010 18:15:39.049926   99368 main.go:141] libmachine: (ha-142481-m03)     <boot dev='hd'/>
	I1010 18:15:39.049939   99368 main.go:141] libmachine: (ha-142481-m03)     <bootmenu enable='no'/>
	I1010 18:15:39.049945   99368 main.go:141] libmachine: (ha-142481-m03)   </os>
	I1010 18:15:39.049956   99368 main.go:141] libmachine: (ha-142481-m03)   <devices>
	I1010 18:15:39.049966   99368 main.go:141] libmachine: (ha-142481-m03)     <disk type='file' device='cdrom'>
	I1010 18:15:39.049980   99368 main.go:141] libmachine: (ha-142481-m03)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/boot2docker.iso'/>
	I1010 18:15:39.049991   99368 main.go:141] libmachine: (ha-142481-m03)       <target dev='hdc' bus='scsi'/>
	I1010 18:15:39.050016   99368 main.go:141] libmachine: (ha-142481-m03)       <readonly/>
	I1010 18:15:39.050029   99368 main.go:141] libmachine: (ha-142481-m03)     </disk>
	I1010 18:15:39.050036   99368 main.go:141] libmachine: (ha-142481-m03)     <disk type='file' device='disk'>
	I1010 18:15:39.050044   99368 main.go:141] libmachine: (ha-142481-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:15:39.050056   99368 main.go:141] libmachine: (ha-142481-m03)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/ha-142481-m03.rawdisk'/>
	I1010 18:15:39.050065   99368 main.go:141] libmachine: (ha-142481-m03)       <target dev='hda' bus='virtio'/>
	I1010 18:15:39.050070   99368 main.go:141] libmachine: (ha-142481-m03)     </disk>
	I1010 18:15:39.050075   99368 main.go:141] libmachine: (ha-142481-m03)     <interface type='network'>
	I1010 18:15:39.050081   99368 main.go:141] libmachine: (ha-142481-m03)       <source network='mk-ha-142481'/>
	I1010 18:15:39.050087   99368 main.go:141] libmachine: (ha-142481-m03)       <model type='virtio'/>
	I1010 18:15:39.050092   99368 main.go:141] libmachine: (ha-142481-m03)     </interface>
	I1010 18:15:39.050099   99368 main.go:141] libmachine: (ha-142481-m03)     <interface type='network'>
	I1010 18:15:39.050104   99368 main.go:141] libmachine: (ha-142481-m03)       <source network='default'/>
	I1010 18:15:39.050114   99368 main.go:141] libmachine: (ha-142481-m03)       <model type='virtio'/>
	I1010 18:15:39.050121   99368 main.go:141] libmachine: (ha-142481-m03)     </interface>
	I1010 18:15:39.050128   99368 main.go:141] libmachine: (ha-142481-m03)     <serial type='pty'>
	I1010 18:15:39.050232   99368 main.go:141] libmachine: (ha-142481-m03)       <target port='0'/>
	I1010 18:15:39.050268   99368 main.go:141] libmachine: (ha-142481-m03)     </serial>
	I1010 18:15:39.050282   99368 main.go:141] libmachine: (ha-142481-m03)     <console type='pty'>
	I1010 18:15:39.050294   99368 main.go:141] libmachine: (ha-142481-m03)       <target type='serial' port='0'/>
	I1010 18:15:39.050305   99368 main.go:141] libmachine: (ha-142481-m03)     </console>
	I1010 18:15:39.050315   99368 main.go:141] libmachine: (ha-142481-m03)     <rng model='virtio'>
	I1010 18:15:39.050328   99368 main.go:141] libmachine: (ha-142481-m03)       <backend model='random'>/dev/random</backend>
	I1010 18:15:39.050340   99368 main.go:141] libmachine: (ha-142481-m03)     </rng>
	I1010 18:15:39.050350   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.050359   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.050371   99368 main.go:141] libmachine: (ha-142481-m03)   </devices>
	I1010 18:15:39.050378   99368 main.go:141] libmachine: (ha-142481-m03) </domain>
	I1010 18:15:39.050391   99368 main.go:141] libmachine: (ha-142481-m03) 
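The block above is the libvirt domain XML that the kvm2 driver generates for the new ha-142481-m03 machine before defining and booting it. A minimal sketch of defining and starting such a domain with the libvirt Go bindings follows; it is illustrative only (the XML file name is a placeholder, and minikube performs these steps inside its docker-machine-driver-kvm2 plugin rather than a standalone program), and it assumes libvirt and its development headers are installed so the cgo bindings build.

package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// "ha-142481-m03.xml" is a placeholder for a file holding domain XML
	// like the definition logged above.
	xmlCfg, err := os.ReadFile("ha-142481-m03.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the machine config
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xmlCfg)) // the "define libvirt domain using xml" step
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the domain; DHCP then supplies the IP the log waits for
		panic(err)
	}
	fmt.Println("domain defined and started")
}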
	I1010 18:15:39.057742   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:01:68:df in network default
	I1010 18:15:39.058339   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring networks are active...
	I1010 18:15:39.058372   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:39.059040   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring network default is active
	I1010 18:15:39.059385   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring network mk-ha-142481 is active
	I1010 18:15:39.060065   99368 main.go:141] libmachine: (ha-142481-m03) Getting domain xml...
	I1010 18:15:39.061108   99368 main.go:141] libmachine: (ha-142481-m03) Creating domain...
	I1010 18:15:40.343936   99368 main.go:141] libmachine: (ha-142481-m03) Waiting to get IP...
	I1010 18:15:40.344892   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.345373   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.345401   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.345319  100144 retry.go:31] will retry after 289.570163ms: waiting for machine to come up
	I1010 18:15:40.637167   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.637765   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.637799   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.637685  100144 retry.go:31] will retry after 311.078832ms: waiting for machine to come up
	I1010 18:15:40.950108   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.950581   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.950610   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.950529  100144 retry.go:31] will retry after 356.951796ms: waiting for machine to come up
	I1010 18:15:41.309147   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:41.309650   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:41.309677   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:41.309602  100144 retry.go:31] will retry after 532.45566ms: waiting for machine to come up
	I1010 18:15:41.843545   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:41.844119   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:41.844147   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:41.844054  100144 retry.go:31] will retry after 601.557958ms: waiting for machine to come up
	I1010 18:15:42.447022   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:42.447619   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:42.447649   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:42.447560  100144 retry.go:31] will retry after 756.716179ms: waiting for machine to come up
	I1010 18:15:43.206472   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:43.207013   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:43.207043   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:43.206973  100144 retry.go:31] will retry after 1.170057285s: waiting for machine to come up
	I1010 18:15:44.378682   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:44.379169   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:44.379199   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:44.379123  100144 retry.go:31] will retry after 1.176461257s: waiting for machine to come up
	I1010 18:15:45.558684   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:45.559193   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:45.559220   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:45.559154  100144 retry.go:31] will retry after 1.48319029s: waiting for machine to come up
	I1010 18:15:47.044036   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:47.044496   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:47.044521   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:47.044430  100144 retry.go:31] will retry after 1.688231692s: waiting for machine to come up
	I1010 18:15:48.734646   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:48.735151   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:48.735174   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:48.735104  100144 retry.go:31] will retry after 2.212019945s: waiting for machine to come up
	I1010 18:15:50.948675   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:50.949207   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:50.949236   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:50.949160  100144 retry.go:31] will retry after 2.319000915s: waiting for machine to come up
	I1010 18:15:53.270642   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:53.271193   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:53.271216   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:53.271155  100144 retry.go:31] will retry after 3.719042495s: waiting for machine to come up
	I1010 18:15:56.994579   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:56.995029   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:56.995054   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:56.994970  100144 retry.go:31] will retry after 5.298417625s: waiting for machine to come up
	I1010 18:16:02.294993   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.295462   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has current primary IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.295487   99368 main.go:141] libmachine: (ha-142481-m03) Found IP for machine: 192.168.39.175
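
The block above is the driver polling libvirt's DHCP leases for the new VM, retrying with a growing delay until an address appears. A minimal sketch of that retry-with-backoff pattern, assuming a hypothetical lookupIP probe and a simple 1.5x growth factor (the real retry.go picks its delays differently):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // lookupIP is a stand-in (assumed) probe; the real code inspects libvirt DHCP leases.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries the probe with a growing delay until it succeeds or the deadline passes.
    func waitForIP(timeout time.Duration) (string, error) {
    	delay := 300 * time.Millisecond
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		ip, err := lookupIP()
    		if err == nil {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // grow the wait between attempts
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	if ip, err := waitForIP(2 * time.Minute); err == nil {
    		fmt.Println("Found IP for machine:", ip)
    	}
    }
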
	I1010 18:16:02.295500   99368 main.go:141] libmachine: (ha-142481-m03) Reserving static IP address...
	I1010 18:16:02.295917   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find host DHCP lease matching {name: "ha-142481-m03", mac: "52:54:00:06:ed:5a", ip: "192.168.39.175"} in network mk-ha-142481
	I1010 18:16:02.376364   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Getting to WaitForSSH function...
	I1010 18:16:02.376400   99368 main.go:141] libmachine: (ha-142481-m03) Reserved static IP address: 192.168.39.175
	I1010 18:16:02.376420   99368 main.go:141] libmachine: (ha-142481-m03) Waiting for SSH to be available...
	I1010 18:16:02.379038   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.379428   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481
	I1010 18:16:02.379482   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find defined IP address of network mk-ha-142481 interface with MAC address 52:54:00:06:ed:5a
	I1010 18:16:02.379643   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH client type: external
	I1010 18:16:02.379666   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa (-rw-------)
	I1010 18:16:02.379695   99368 main.go:141] libmachine: (ha-142481-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:16:02.379708   99368 main.go:141] libmachine: (ha-142481-m03) DBG | About to run SSH command:
	I1010 18:16:02.379720   99368 main.go:141] libmachine: (ha-142481-m03) DBG | exit 0
	I1010 18:16:02.383609   99368 main.go:141] libmachine: (ha-142481-m03) DBG | SSH cmd err, output: exit status 255: 
	I1010 18:16:02.383645   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1010 18:16:02.383673   99368 main.go:141] libmachine: (ha-142481-m03) DBG | command : exit 0
	I1010 18:16:02.383687   99368 main.go:141] libmachine: (ha-142481-m03) DBG | err     : exit status 255
	I1010 18:16:02.383701   99368 main.go:141] libmachine: (ha-142481-m03) DBG | output  : 
	I1010 18:16:05.385045   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Getting to WaitForSSH function...
	I1010 18:16:05.387500   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.388024   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.388058   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.388149   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH client type: external
	I1010 18:16:05.388172   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa (-rw-------)
	I1010 18:16:05.388198   99368 main.go:141] libmachine: (ha-142481-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:16:05.388212   99368 main.go:141] libmachine: (ha-142481-m03) DBG | About to run SSH command:
	I1010 18:16:05.388222   99368 main.go:141] libmachine: (ha-142481-m03) DBG | exit 0
	I1010 18:16:05.517373   99368 main.go:141] libmachine: (ha-142481-m03) DBG | SSH cmd err, output: <nil>: 
	I1010 18:16:05.517675   99368 main.go:141] libmachine: (ha-142481-m03) KVM machine creation complete!
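
Readiness was confirmed by running `exit 0` through a throw-away external OpenSSH invocation (the failed attempt at 18:16:02 and the successful one at 18:16:05 above). A compressed sketch of that probe; the key path is shortened here and the option list is trimmed to the essentials visible in the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sshReady runs `exit 0` on the guest; a zero exit status means sshd is reachable.
    // Host-key checking is disabled because the VM was created seconds ago.
    func sshReady(ip, keyPath string) bool {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + ip,
    		"exit 0",
    	}
    	return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
    	fmt.Println("ssh ready:", sshReady("192.168.39.175", "machines/ha-142481-m03/id_rsa"))
    }
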
	I1010 18:16:05.517976   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:16:05.518524   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:05.518756   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:05.518928   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:16:05.518944   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetState
	I1010 18:16:05.520359   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:16:05.520374   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:16:05.520382   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:16:05.520388   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.523092   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.523568   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.523601   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.523714   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.523901   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.524055   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.524156   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.524338   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.524636   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.524669   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:16:05.632367   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:16:05.632396   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:16:05.632408   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.635809   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.636216   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.636238   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.636547   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.636757   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.636963   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.637090   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.637319   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.637523   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.637539   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:16:05.749769   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:16:05.749833   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:16:05.749840   99368 main.go:141] libmachine: Provisioning with buildroot...
	I1010 18:16:05.749847   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:05.750100   99368 buildroot.go:166] provisioning hostname "ha-142481-m03"
	I1010 18:16:05.750135   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:05.750348   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.753204   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.753697   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.753724   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.753970   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.754155   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.754326   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.754456   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.754597   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.754815   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.754835   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481-m03 && echo "ha-142481-m03" | sudo tee /etc/hostname
	I1010 18:16:05.886094   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481-m03
	
	I1010 18:16:05.886129   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.889027   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.889401   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.889420   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.889629   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.889843   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.889995   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.890115   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.890271   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.890474   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.890491   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:16:06.011027   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:16:06.011075   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:16:06.011118   99368 buildroot.go:174] setting up certificates
	I1010 18:16:06.011128   99368 provision.go:84] configureAuth start
	I1010 18:16:06.011159   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:06.011515   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.014592   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.015019   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.015050   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.015255   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.017745   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.018212   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.018241   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.018399   99368 provision.go:143] copyHostCerts
	I1010 18:16:06.018428   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:16:06.018461   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:16:06.018471   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:16:06.018534   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:16:06.018611   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:16:06.018628   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:16:06.018635   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:16:06.018659   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:16:06.018703   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:16:06.018722   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:16:06.018728   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:16:06.018748   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:16:06.018800   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481-m03 san=[127.0.0.1 192.168.39.175 ha-142481-m03 localhost minikube]
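
The server certificate requested above carries SANs for 127.0.0.1, the node IP, the hostname, localhost and minikube, and is signed by the shared minikube CA. A self-contained sketch of issuing such a certificate with Go's crypto/x509; it generates a throw-away CA instead of loading ca.pem/ca-key.pem, and key sizes, lifetimes and error handling are illustrative only:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throw-away CA (the real code loads ca.pem / ca-key.pem from .minikube/certs).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SAN list from the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "ha-142481-m03", Organization: []string{"jenkins.ha-142481-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.175")},
    		DNSNames:     []string{"ha-142481-m03", "localhost", "minikube"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
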
	I1010 18:16:06.222717   99368 provision.go:177] copyRemoteCerts
	I1010 18:16:06.222779   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:16:06.222805   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.225434   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.225825   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.225848   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.226065   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.226286   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.226456   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.226630   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.315791   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:16:06.315882   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:16:06.343259   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:16:06.343345   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 18:16:06.370749   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:16:06.370822   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:16:06.397148   99368 provision.go:87] duration metric: took 386.005417ms to configureAuth
	I1010 18:16:06.397183   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:16:06.397452   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:06.397548   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.400947   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.401493   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.401529   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.401697   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.401877   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.402099   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.402329   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.402536   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:06.402752   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:06.402772   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:16:06.637717   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:16:06.637751   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:16:06.637762   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetURL
	I1010 18:16:06.639112   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using libvirt version 6000000
	I1010 18:16:06.641181   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.641548   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.641587   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.641730   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:16:06.641747   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:16:06.641756   99368 client.go:171] duration metric: took 27.963208724s to LocalClient.Create
	I1010 18:16:06.641785   99368 start.go:167] duration metric: took 27.963279742s to libmachine.API.Create "ha-142481"
	I1010 18:16:06.641795   99368 start.go:293] postStartSetup for "ha-142481-m03" (driver="kvm2")
	I1010 18:16:06.641804   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:16:06.641824   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.642091   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:16:06.642123   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.644087   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.644396   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.644432   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.644567   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.644765   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.644924   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.645078   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.732228   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:16:06.736988   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:16:06.737036   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:16:06.737116   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:16:06.737228   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:16:06.737241   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:16:06.737350   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:16:06.747599   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:16:06.779643   99368 start.go:296] duration metric: took 137.832802ms for postStartSetup
	I1010 18:16:06.779701   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:16:06.780474   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.783287   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.783711   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.783739   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.784133   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:16:06.784363   99368 start.go:128] duration metric: took 28.126102871s to createHost
	I1010 18:16:06.784390   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.786724   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.787090   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.787113   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.787327   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.787526   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.787700   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.787826   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.787997   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:06.788211   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:06.788226   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:16:06.901742   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584166.882037024
	
	I1010 18:16:06.901769   99368 fix.go:216] guest clock: 1728584166.882037024
	I1010 18:16:06.901778   99368 fix.go:229] Guest: 2024-10-10 18:16:06.882037024 +0000 UTC Remote: 2024-10-10 18:16:06.784377622 +0000 UTC m=+148.714965698 (delta=97.659402ms)
	I1010 18:16:06.901799   99368 fix.go:200] guest clock delta is within tolerance: 97.659402ms
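
The guest's `date +%s.%N` output is parsed and compared with the host clock; only if the delta exceeded a tolerance would the guest clock be reset. A small sketch of that comparison using the two timestamps from the log; the 2s tolerance is an assumed value for illustration, not minikube's configured limit:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns the guest-minus-host offset.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	host := time.Unix(0, 1728584166784377622) // "Remote" timestamp from the log
    	delta, _ := clockDelta("1728584166.882037024", host)
    	const tolerance = 2 * time.Second // assumed tolerance for illustration
    	if delta < tolerance && delta > -tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Println("guest clock out of tolerance; would sync it")
    	}
    }
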
	I1010 18:16:06.901806   99368 start.go:83] releasing machines lock for "ha-142481-m03", held for 28.24367452s
	I1010 18:16:06.901831   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.902170   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.904709   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.905164   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.905194   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.907619   99368 out.go:177] * Found network options:
	I1010 18:16:06.909057   99368 out.go:177]   - NO_PROXY=192.168.39.104,192.168.39.186
	W1010 18:16:06.910397   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	W1010 18:16:06.910422   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:16:06.910439   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911020   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911247   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911351   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:16:06.911394   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	W1010 18:16:06.911428   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	W1010 18:16:06.911458   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:16:06.911514   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:16:06.911529   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.914295   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914543   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914629   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.914656   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914760   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.914838   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.914856   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914913   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.915049   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.915098   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.915168   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.915225   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.915381   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.915497   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:07.163627   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:16:07.170344   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:16:07.170418   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:16:07.188658   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:16:07.188691   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:16:07.188764   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:16:07.207458   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:16:07.223388   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:16:07.223465   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:16:07.240312   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:16:07.258338   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:16:07.397297   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:16:07.555534   99368 docker.go:233] disabling docker service ...
	I1010 18:16:07.555621   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:16:07.571003   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:16:07.585612   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:16:07.724995   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:16:07.861369   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:16:07.876144   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:16:07.895651   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:16:07.895716   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.906721   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:16:07.906792   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.917729   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.929016   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.940559   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:16:07.953995   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.965226   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.984344   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.995983   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:16:08.006420   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:16:08.006504   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:16:08.021735   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:16:08.033011   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:08.164791   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
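
The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, force the cgroupfs cgroup manager, pin conmon_cgroup to "pod", and open unprivileged ports, before crio is restarted. A hedged Go sketch of the first three substitutions applied to an in-memory copy of the file; the sample input lines are assumptions, and on the node this is done with sed over the real file:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Assumed sample of the relevant lines in 02-crio.conf before the edits.
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
    		"cgroup_manager = \"systemd\"\n" +
    		"conmon_cgroup = \"system.slice\"\n"

    	// Pin the pause image, as in the first sed above.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	// Force the cgroupfs cgroup manager.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Drop any existing conmon_cgroup line, then pin it to "pod" after cgroup_manager.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
    		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

    	fmt.Print(conf)
    }
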
	I1010 18:16:08.260672   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:16:08.260742   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:16:08.271900   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:16:08.271960   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:16:08.275929   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:16:08.314672   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:16:08.314749   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:16:08.346340   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:16:08.377606   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:16:08.379014   99368 out.go:177]   - env NO_PROXY=192.168.39.104
	I1010 18:16:08.380435   99368 out.go:177]   - env NO_PROXY=192.168.39.104,192.168.39.186
	I1010 18:16:08.381694   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:08.384544   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:08.384908   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:08.384939   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:08.385183   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:16:08.389725   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:16:08.402638   99368 mustload.go:65] Loading cluster: ha-142481
	I1010 18:16:08.402881   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:08.403135   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:08.403183   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:08.418274   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I1010 18:16:08.418827   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:08.419392   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:08.419418   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:08.419747   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:08.419899   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:16:08.421605   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:16:08.421927   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:08.421980   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:08.437329   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I1010 18:16:08.437789   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:08.438250   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:08.438271   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:08.438615   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:08.438801   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:16:08.438970   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.175
	I1010 18:16:08.438988   99368 certs.go:194] generating shared ca certs ...
	I1010 18:16:08.439008   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.439150   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:16:08.439211   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:16:08.439224   99368 certs.go:256] generating profile certs ...
	I1010 18:16:08.439325   99368 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:16:08.439355   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d
	I1010 18:16:08.439376   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.186 192.168.39.175 192.168.39.254]
	I1010 18:16:08.528731   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d ...
	I1010 18:16:08.528764   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d: {Name:mk202db6f01b46b51940ca7afe581ede7b3af4e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.528980   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d ...
	I1010 18:16:08.528997   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d: {Name:mk61783eedf299ba3a6dbb3f62b131938823078c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.529112   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:16:08.529294   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
	I1010 18:16:08.529465   99368 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:16:08.529488   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:16:08.529506   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:16:08.529521   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:16:08.529540   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:16:08.529557   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:16:08.529580   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:16:08.529599   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:16:08.545002   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:16:08.545123   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:16:08.545166   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:16:08.545178   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:16:08.545225   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:16:08.545259   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:16:08.545291   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:16:08.545339   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:16:08.545380   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:16:08.545401   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:08.545415   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:16:08.545465   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:16:08.548797   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:08.549296   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:16:08.549316   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:08.549545   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:16:08.549789   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:16:08.549993   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:16:08.550143   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:16:08.629272   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1010 18:16:08.635349   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1010 18:16:08.648258   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1010 18:16:08.653797   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1010 18:16:08.665553   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1010 18:16:08.670066   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1010 18:16:08.681281   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1010 18:16:08.685851   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1010 18:16:08.696759   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1010 18:16:08.701070   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1010 18:16:08.719143   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1010 18:16:08.723782   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1010 18:16:08.735082   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:16:08.763420   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:16:08.789246   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:16:08.814697   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:16:08.840641   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1010 18:16:08.865783   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:16:08.890663   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:16:08.916077   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:16:08.941574   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:16:08.971689   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:16:08.996394   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:16:09.021329   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1010 18:16:09.039289   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1010 18:16:09.058514   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1010 18:16:09.075508   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1010 18:16:09.094047   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1010 18:16:09.112093   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1010 18:16:09.130182   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1010 18:16:09.147655   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:16:09.153962   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:16:09.165361   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.170099   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.170163   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.175991   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:16:09.187134   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:16:09.199298   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.204550   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.204607   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.210501   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:16:09.222047   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:16:09.233165   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.238141   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.238209   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.243899   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
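The commands above install each extra CA into the guest's trust store: the PEM is placed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (the 3ec20f2e.0, b5213941.0, 51391683.0 names). The same steps can be reproduced by hand; a minimal sketch, assuming a hypothetical certificate at /usr/share/ca-certificates/example.pem:

  # compute the subject hash openssl uses as the certificate lookup name
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
  # expose the certificate under that hash, matching the ln -fs calls above
  sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"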
	I1010 18:16:09.256154   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:16:09.260558   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:16:09.260620   99368 kubeadm.go:934] updating node {m03 192.168.39.175 8443 v1.31.1 crio true true} ...
	I1010 18:16:09.260712   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:16:09.260747   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:16:09.260788   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:16:09.281432   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:16:09.281532   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
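The generated manifest above runs kube-vip as a static pod in ARP mode: it advertises the cluster VIP 192.168.39.254 on eth0, enables control-plane load balancing on port 8443 (lb_enable/lb_port, backed by the ip_vs modules loaded just before), and uses leader election on the plndr-cp-lock lease so only one control-plane node holds the VIP at a time. Two quick checks, as a sketch run from a node with access to this cluster (kubeconfig/context assumed):

  # the current kube-vip leader should be advertising the VIP on eth0
  ip addr show dev eth0 | grep 192.168.39.254
  # leader election is recorded in the Lease named by vip_leasename
  kubectl -n kube-system get lease plndr-cp-lock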
	I1010 18:16:09.281598   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:16:09.292238   99368 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1010 18:16:09.292302   99368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1010 18:16:09.302815   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1010 18:16:09.302834   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1010 18:16:09.302847   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:16:09.302858   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:16:09.302874   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1010 18:16:09.302911   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:16:09.302925   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:16:09.302927   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:16:09.313038   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1010 18:16:09.313076   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1010 18:16:09.313295   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1010 18:16:09.313324   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1010 18:16:09.329019   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:16:09.329132   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:16:09.460792   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1010 18:16:09.460863   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
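Because the kubeadm, kubectl, and kubelet binaries are missing from /var/lib/minikube/binaries/v1.31.1, they are fetched from dl.k8s.io with their published .sha256 checksums (the checksum=file:... URLs above) and then copied onto the node. A minimal sketch of the same download-and-verify step done by hand, using the kubelet URL from the log:

  # fetch a binary and its published checksum, then verify before installing
  curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
  curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check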
	I1010 18:16:10.167695   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1010 18:16:10.178304   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1010 18:16:10.196198   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:16:10.214107   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1010 18:16:10.231699   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:16:10.235598   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:16:10.249379   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:10.372228   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
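With the 10-kubeadm.conf drop-in and the kubelet.service unit written out, the runner reloads systemd and starts kubelet. A short sketch for confirming the result on the node:

  # show the effective unit, including the 10-kubeadm.conf drop-in written above
  systemctl cat kubelet
  # confirm the service actually came up
  sudo systemctl is-active kubelet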
	I1010 18:16:10.389956   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:16:10.390482   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:10.390543   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:10.406538   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
	I1010 18:16:10.407120   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:10.407715   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:10.407745   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:10.408171   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:10.408424   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:16:10.408616   99368 start.go:317] joinCluster: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:16:10.408761   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1010 18:16:10.408786   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:16:10.412501   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:10.412938   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:16:10.412967   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:10.413287   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:16:10.413489   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:16:10.413662   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:16:10.413878   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:16:10.584962   99368 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:10.585036   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01a2dn.g9vqo5mbslppupip --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m03 --control-plane --apiserver-advertise-address=192.168.39.175 --apiserver-bind-port=8443"
	I1010 18:16:34.116751   99368 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01a2dn.g9vqo5mbslppupip --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m03 --control-plane --apiserver-advertise-address=192.168.39.175 --apiserver-bind-port=8443": (23.531656117s)
	I1010 18:16:34.116799   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1010 18:16:34.662406   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481-m03 minikube.k8s.io/updated_at=2024_10_10T18_16_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=false
	I1010 18:16:34.812925   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-142481-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1010 18:16:34.939968   99368 start.go:319] duration metric: took 24.531346267s to joinCluster
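The join flow above creates a fresh bootstrap token on the primary (kubeadm token create --print-join-command --ttl=0) and runs the printed kubeadm join on m03 with --control-plane, pinning trust in the cluster CA via --discovery-token-ca-cert-hash. As a sketch, that sha256:... value can be recomputed with the usual kubeadm recipe on the primary control-plane node (standard kubeadm CA path assumed):

  # the discovery hash is the SHA-256 of the cluster CA public key
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'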
	I1010 18:16:34.940121   99368 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:34.940600   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:34.942338   99368 out.go:177] * Verifying Kubernetes components...
	I1010 18:16:34.943872   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:35.261137   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:16:35.322955   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:16:35.323214   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1010 18:16:35.323281   99368 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.104:8443
	I1010 18:16:35.323557   99368 node_ready.go:35] waiting up to 6m0s for node "ha-142481-m03" to be "Ready" ...
	I1010 18:16:35.323656   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:35.323668   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:35.323679   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:35.323685   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:35.327318   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:35.823831   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:35.823858   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:35.823871   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:35.823877   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:35.828659   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:36.324358   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:36.324382   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:36.324391   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:36.324395   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:36.327758   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:36.823911   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:36.823934   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:36.823942   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:36.823946   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:36.827063   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:37.323987   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:37.324011   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:37.324019   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:37.324023   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:37.327375   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:37.328058   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:37.824329   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:37.824354   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:37.824443   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:37.824455   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:37.828067   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:38.323986   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:38.324025   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:38.324040   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:38.324046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:38.327494   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:38.823762   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:38.823785   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:38.823794   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:38.823798   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:38.827926   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:39.323928   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:39.323957   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:39.323969   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:39.323975   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:39.330422   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:39.331171   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:39.824574   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:39.824598   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:39.824607   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:39.824610   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:39.828722   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:40.324796   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:40.324827   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:40.324838   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:40.324845   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:40.328842   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:40.823953   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:40.823979   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:40.823990   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:40.823996   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:40.828272   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:41.324192   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:41.324218   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:41.324227   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:41.324230   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:41.327987   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:41.824162   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:41.824186   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:41.824198   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:41.824204   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:41.827541   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:41.828232   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:42.324743   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:42.324783   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:42.324794   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:42.324801   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:42.328551   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:42.824718   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:42.824744   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:42.824755   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:42.824760   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:42.828428   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.324320   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:43.324346   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:43.324355   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:43.324364   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:43.328322   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.823956   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:43.824002   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:43.824013   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:43.824019   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:43.827615   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.828260   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:44.324587   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:44.324612   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:44.324620   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:44.324623   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:44.328569   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:44.823816   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:44.823840   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:44.823849   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:44.823853   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:44.827589   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.324648   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:45.324673   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:45.324681   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:45.324684   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:45.328227   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.824305   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:45.824330   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:45.824338   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:45.824342   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:45.827901   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.828489   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:46.323779   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:46.323813   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:46.323825   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:46.323830   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:46.327223   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:46.823931   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:46.823955   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:46.823964   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:46.823968   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:46.828168   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:47.324172   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:47.324200   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:47.324214   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:47.324232   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:47.327405   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:47.824446   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:47.824470   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:47.824478   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:47.824483   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:47.828085   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:47.828574   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:48.324641   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:48.324666   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:48.324674   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:48.324678   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:48.328399   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:48.823841   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:48.823872   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:48.823883   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:48.823899   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:48.827862   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:49.324364   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:49.324391   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:49.324402   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:49.324410   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:49.329836   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:16:49.824868   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:49.824898   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:49.824909   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:49.824916   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:49.832424   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:16:49.833781   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:50.324106   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:50.324129   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:50.324137   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:50.324141   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:50.327377   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:50.824781   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:50.824809   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:50.824818   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:50.824824   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:50.828461   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:51.324626   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:51.324651   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:51.324659   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:51.324663   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:51.327965   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:51.824004   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:51.824028   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:51.824036   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:51.824041   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:51.827827   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.323895   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.323930   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.323939   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.323943   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.327292   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.327943   99368 node_ready.go:49] node "ha-142481-m03" has status "Ready":"True"
	I1010 18:16:52.327963   99368 node_ready.go:38] duration metric: took 17.004388796s for node "ha-142481-m03" to be "Ready" ...
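The loop above polls GET /api/v1/nodes/ha-142481-m03 roughly twice a second until the node's Ready condition flips to True, which took about 17s here. A sketch of the equivalent check with kubectl (kubeconfig/context assumed to point at this cluster):

  # one-off readiness check for the new control-plane node
  kubectl get node ha-142481-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
  # or block until the node is Ready, mirroring the 6m0s budget above
  kubectl wait --for=condition=Ready node/ha-142481-m03 --timeout=6m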
	I1010 18:16:52.327973   99368 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:16:52.328041   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:52.328051   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.328058   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.328063   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.335352   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:16:52.341969   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.342092   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-28dll
	I1010 18:16:52.342105   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.342116   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.342121   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.346524   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.347823   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.347844   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.347853   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.347860   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.352427   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.353100   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.353132   99368 pod_ready.go:82] duration metric: took 11.131703ms for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.353146   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.353233   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xfhq8
	I1010 18:16:52.353246   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.353255   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.353262   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.358189   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.359137   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.359158   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.359170   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.359194   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.361882   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.362586   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.362606   99368 pod_ready.go:82] duration metric: took 9.449469ms for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.362618   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.362680   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481
	I1010 18:16:52.362689   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.362696   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.362701   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.365259   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.365819   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.365835   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.365842   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.365857   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.368864   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.369337   99368 pod_ready.go:93] pod "etcd-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.369355   99368 pod_ready.go:82] duration metric: took 6.728138ms for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.369365   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.369427   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m02
	I1010 18:16:52.369435   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.369442   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.369447   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.371801   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.372469   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:52.372485   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.372496   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.372501   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.374845   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.375380   99368 pod_ready.go:93] pod "etcd-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.375400   99368 pod_ready.go:82] duration metric: took 6.028654ms for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.375414   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.524876   99368 request.go:632] Waited for 149.316037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m03
	I1010 18:16:52.524969   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m03
	I1010 18:16:52.524980   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.524993   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.525002   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.528336   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.724349   99368 request.go:632] Waited for 195.357304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.724414   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.724419   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.724429   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.724433   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.727821   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.728420   99368 pod_ready.go:93] pod "etcd-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.728440   99368 pod_ready.go:82] duration metric: took 353.013897ms for pod "etcd-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.728461   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.924606   99368 request.go:632] Waited for 196.006652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:16:52.924680   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:16:52.924687   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.924697   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.924702   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.928387   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.124197   99368 request.go:632] Waited for 194.992104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:53.124259   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:53.124264   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.124276   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.124281   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.127550   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.128097   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.128116   99368 pod_ready.go:82] duration metric: took 399.647709ms for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.128127   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.324538   99368 request.go:632] Waited for 196.340534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:16:53.324600   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:16:53.324606   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.324613   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.324617   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.328266   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.524803   99368 request.go:632] Waited for 195.841443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:53.524898   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:53.524906   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.524920   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.524931   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.529027   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:53.529616   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.529639   99368 pod_ready.go:82] duration metric: took 401.504985ms for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.529650   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.724123   99368 request.go:632] Waited for 194.402378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m03
	I1010 18:16:53.724207   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m03
	I1010 18:16:53.724212   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.724220   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.724226   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.728029   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.924000   99368 request.go:632] Waited for 195.20231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:53.924121   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:53.924136   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.924145   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.924149   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.927318   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.927936   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.927963   99368 pod_ready.go:82] duration metric: took 398.303309ms for pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.927977   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.124931   99368 request.go:632] Waited for 196.86396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:16:54.125030   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:16:54.125037   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.125045   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.125050   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.129323   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:54.324484   99368 request.go:632] Waited for 194.400861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:54.324554   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:54.324564   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.324574   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.324580   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.327854   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.328431   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:54.328451   99368 pod_ready.go:82] duration metric: took 400.466203ms for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.328463   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.524928   99368 request.go:632] Waited for 196.394012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:16:54.524994   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:16:54.525000   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.525008   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.525013   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.528390   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.724248   99368 request.go:632] Waited for 195.108613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:54.724318   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:54.724325   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.724335   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.724341   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.727499   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.727990   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:54.728011   99368 pod_ready.go:82] duration metric: took 399.541027ms for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.728023   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.924017   99368 request.go:632] Waited for 195.924922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m03
	I1010 18:16:54.924118   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m03
	I1010 18:16:54.924129   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.924137   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.924142   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.928875   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:55.123960   99368 request.go:632] Waited for 194.31178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.124017   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.124022   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.124030   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.124033   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.127461   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.128120   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.128144   99368 pod_ready.go:82] duration metric: took 400.113475ms for pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.128160   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cdjzg" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.323986   99368 request.go:632] Waited for 195.748073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cdjzg
	I1010 18:16:55.324049   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cdjzg
	I1010 18:16:55.324055   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.324063   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.324069   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.327396   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.524493   99368 request.go:632] Waited for 196.370396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.524560   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.524567   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.524578   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.524586   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.534026   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:16:55.534701   99368 pod_ready.go:93] pod "kube-proxy-cdjzg" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.534728   99368 pod_ready.go:82] duration metric: took 406.559679ms for pod "kube-proxy-cdjzg" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.534745   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.724765   99368 request.go:632] Waited for 189.945021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:16:55.724857   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:16:55.724864   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.724872   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.724878   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.727940   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.923972   99368 request.go:632] Waited for 195.304711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:55.924037   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:55.924052   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.924078   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.924085   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.927605   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.928243   99368 pod_ready.go:93] pod "kube-proxy-gwvrh" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.928264   99368 pod_ready.go:82] duration metric: took 393.511622ms for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.928278   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.124193   99368 request.go:632] Waited for 195.82573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:16:56.124313   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:16:56.124327   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.124336   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.124340   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.127896   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.324881   99368 request.go:632] Waited for 196.244687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:56.324996   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:56.325012   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.325022   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.325029   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.328576   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.329284   99368 pod_ready.go:93] pod "kube-proxy-srfng" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:56.329304   99368 pod_ready.go:82] duration metric: took 401.01865ms for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.329315   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.524473   99368 request.go:632] Waited for 195.075639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:16:56.524535   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:16:56.524541   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.524548   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.524554   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.527661   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.724798   99368 request.go:632] Waited for 196.388114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:56.724919   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:56.724934   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.724945   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.724955   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.728172   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.728664   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:56.728684   99368 pod_ready.go:82] duration metric: took 399.362342ms for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.728700   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.924703   99368 request.go:632] Waited for 195.908558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:16:56.924769   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:16:56.924784   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.924793   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.924796   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.928241   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.124466   99368 request.go:632] Waited for 195.354302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:57.124566   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:57.124592   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.124604   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.124613   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.128217   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.128748   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:57.128773   99368 pod_ready.go:82] duration metric: took 400.06441ms for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.128788   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.323894   99368 request.go:632] Waited for 195.025916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m03
	I1010 18:16:57.323960   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m03
	I1010 18:16:57.324019   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.324032   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.324036   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.328239   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:57.524431   99368 request.go:632] Waited for 195.425292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:57.524497   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:57.524503   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.524511   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.524515   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.527825   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.528689   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:57.528706   99368 pod_ready.go:82] duration metric: took 399.911051ms for pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.528718   99368 pod_ready.go:39] duration metric: took 5.200736466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:16:57.528734   99368 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:16:57.528787   99368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:16:57.545663   99368 api_server.go:72] duration metric: took 22.605494204s to wait for apiserver process to appear ...
	I1010 18:16:57.545694   99368 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:16:57.545718   99368 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I1010 18:16:57.552066   99368 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I1010 18:16:57.552813   99368 round_trippers.go:463] GET https://192.168.39.104:8443/version
	I1010 18:16:57.552870   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.552882   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.552890   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.555288   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:57.555381   99368 api_server.go:141] control plane version: v1.31.1
	I1010 18:16:57.555401   99368 api_server.go:131] duration metric: took 9.699914ms to wait for apiserver health ...
	I1010 18:16:57.555411   99368 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:16:57.724005   99368 request.go:632] Waited for 168.467999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:57.724082   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:57.724091   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.724106   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.724114   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.730879   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:57.737404   99368 system_pods.go:59] 24 kube-system pods found
	I1010 18:16:57.737436   99368 system_pods.go:61] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:16:57.737442   99368 system_pods.go:61] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:16:57.737445   99368 system_pods.go:61] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:16:57.737449   99368 system_pods.go:61] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:16:57.737452   99368 system_pods.go:61] "etcd-ha-142481-m03" [3f1ae212-d09b-446c-9172-52b9bfc6c20c] Running
	I1010 18:16:57.737456   99368 system_pods.go:61] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:16:57.737459   99368 system_pods.go:61] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:16:57.737463   99368 system_pods.go:61] "kindnet-cjcsf" [237e5649-ed64-401c-befd-99ef520d0761] Running
	I1010 18:16:57.737466   99368 system_pods.go:61] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:16:57.737469   99368 system_pods.go:61] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:16:57.737472   99368 system_pods.go:61] "kube-apiserver-ha-142481-m03" [4c7836a0-6697-4ce5-87d6-582097925f80] Running
	I1010 18:16:57.737476   99368 system_pods.go:61] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:16:57.737480   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:16:57.737484   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m03" [9444eb06-6dc4-44ab-a7d6-2d1d5b3e6410] Running
	I1010 18:16:57.737487   99368 system_pods.go:61] "kube-proxy-cdjzg" [98288460-9764-4e92-a589-e7e34654cfc5] Running
	I1010 18:16:57.737491   99368 system_pods.go:61] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:16:57.737494   99368 system_pods.go:61] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:16:57.737499   99368 system_pods.go:61] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:16:57.737505   99368 system_pods.go:61] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:16:57.737509   99368 system_pods.go:61] "kube-scheduler-ha-142481-m03" [a3eea545-bc31-4990-ad58-a43666964468] Running
	I1010 18:16:57.737512   99368 system_pods.go:61] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:16:57.737515   99368 system_pods.go:61] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:16:57.737519   99368 system_pods.go:61] "kube-vip-ha-142481-m03" [a93b4d63-0f6c-47b5-b987-a082b2b0d51a] Running
	I1010 18:16:57.737522   99368 system_pods.go:61] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:16:57.737528   99368 system_pods.go:74] duration metric: took 182.108204ms to wait for pod list to return data ...
	I1010 18:16:57.737537   99368 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:16:57.923961   99368 request.go:632] Waited for 186.32043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:16:57.924040   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:16:57.924048   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.924059   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.924064   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.928023   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.928206   99368 default_sa.go:45] found service account: "default"
	I1010 18:16:57.928229   99368 default_sa.go:55] duration metric: took 190.684117ms for default service account to be created ...
	I1010 18:16:57.928243   99368 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:16:58.124915   99368 request.go:632] Waited for 196.547566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:58.124982   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:58.124989   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:58.124999   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:58.125007   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:58.131096   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:58.138059   99368 system_pods.go:86] 24 kube-system pods found
	I1010 18:16:58.138089   99368 system_pods.go:89] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:16:58.138095   99368 system_pods.go:89] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:16:58.138099   99368 system_pods.go:89] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:16:58.138103   99368 system_pods.go:89] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:16:58.138107   99368 system_pods.go:89] "etcd-ha-142481-m03" [3f1ae212-d09b-446c-9172-52b9bfc6c20c] Running
	I1010 18:16:58.138111   99368 system_pods.go:89] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:16:58.138114   99368 system_pods.go:89] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:16:58.138117   99368 system_pods.go:89] "kindnet-cjcsf" [237e5649-ed64-401c-befd-99ef520d0761] Running
	I1010 18:16:58.138120   99368 system_pods.go:89] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:16:58.138124   99368 system_pods.go:89] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:16:58.138127   99368 system_pods.go:89] "kube-apiserver-ha-142481-m03" [4c7836a0-6697-4ce5-87d6-582097925f80] Running
	I1010 18:16:58.138131   99368 system_pods.go:89] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:16:58.138134   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:16:58.138138   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m03" [9444eb06-6dc4-44ab-a7d6-2d1d5b3e6410] Running
	I1010 18:16:58.138141   99368 system_pods.go:89] "kube-proxy-cdjzg" [98288460-9764-4e92-a589-e7e34654cfc5] Running
	I1010 18:16:58.138145   99368 system_pods.go:89] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:16:58.138148   99368 system_pods.go:89] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:16:58.138150   99368 system_pods.go:89] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:16:58.138153   99368 system_pods.go:89] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:16:58.138156   99368 system_pods.go:89] "kube-scheduler-ha-142481-m03" [a3eea545-bc31-4990-ad58-a43666964468] Running
	I1010 18:16:58.138160   99368 system_pods.go:89] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:16:58.138163   99368 system_pods.go:89] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:16:58.138165   99368 system_pods.go:89] "kube-vip-ha-142481-m03" [a93b4d63-0f6c-47b5-b987-a082b2b0d51a] Running
	I1010 18:16:58.138168   99368 system_pods.go:89] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:16:58.138175   99368 system_pods.go:126] duration metric: took 209.923309ms to wait for k8s-apps to be running ...
	I1010 18:16:58.138188   99368 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:16:58.138234   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:16:58.154620   99368 system_svc.go:56] duration metric: took 16.42135ms WaitForService to wait for kubelet
	I1010 18:16:58.154660   99368 kubeadm.go:582] duration metric: took 23.214494056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:16:58.154684   99368 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:16:58.324577   99368 request.go:632] Waited for 169.800219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I1010 18:16:58.324670   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I1010 18:16:58.324677   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:58.324687   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:58.324694   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:58.328908   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:58.329887   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329907   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329918   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329922   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329926   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329929   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329932   99368 node_conditions.go:105] duration metric: took 175.242574ms to run NodePressure ...
	I1010 18:16:58.329945   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:16:58.329965   99368 start.go:255] writing updated cluster config ...
	I1010 18:16:58.330248   99368 ssh_runner.go:195] Run: rm -f paused
	I1010 18:16:58.382565   99368 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 18:16:58.384704   99368 out.go:177] * Done! kubectl is now configured to use "ha-142481" cluster and "default" namespace by default
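	[Editor's illustrative sketch, not part of the captured run: the lines above show minikube probing the apiserver's /healthz endpoint at https://192.168.39.104:8443 and treating the literal 200 "ok" response as healthy before declaring the cluster ready. The Go snippet below reproduces that probe in isolation; the endpoint, timeout, and TLS handling are assumptions chosen only to keep the sketch self-contained.]

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test VM's apiserver uses a cluster-local CA; verification is
				// skipped here purely so the sketch runs without the kubeconfig CA bundle.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.104:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the body "ok", matching the
		// "returned 200: ok" lines logged above.
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
	}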
	
	
	==> CRI-O <==
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.281177859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584446281156574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0775c130-00b8-4234-8994-45f595e5c3c6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.281748558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d43bcd58-68f1-4515-9736-e44769ca8445 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.281819936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d43bcd58-68f1-4515-9736-e44769ca8445 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.282041446Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d43bcd58-68f1-4515-9736-e44769ca8445 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.322268752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0e7fbd0-a626-4777-8c3f-1016ea9402c5 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.322370748Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0e7fbd0-a626-4777-8c3f-1016ea9402c5 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.323487276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e0e3a4d-ee93-4449-91e1-3950375059a6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.323942847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584446323922064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e0e3a4d-ee93-4449-91e1-3950375059a6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.324494066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3d16684-7524-4914-b8d0-9a7b65b25d97 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.324609519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3d16684-7524-4914-b8d0-9a7b65b25d97 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.324871422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3d16684-7524-4914-b8d0-9a7b65b25d97 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.370080693Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad1c24d6-9d71-46be-961a-bdbf787e450b name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.370179443Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad1c24d6-9d71-46be-961a-bdbf787e450b name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.371672273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1c7e9b34-69b5-405c-ba6a-40cceba01fab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.372104750Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584446372080374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1c7e9b34-69b5-405c-ba6a-40cceba01fab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.372965288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6900efab-21d9-4e0f-a756-7910791b7a97 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.373040871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6900efab-21d9-4e0f-a756-7910791b7a97 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.373286632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6900efab-21d9-4e0f-a756-7910791b7a97 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.411915174Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8975c4b-3d28-41ba-8feb-324f08cb70a5 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.412009246Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8975c4b-3d28-41ba-8feb-324f08cb70a5 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.414464527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b34dbbb-e020-469c-bffb-484c744e424e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.415144790Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584446415114195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b34dbbb-e020-469c-bffb-484c744e424e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.415803458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58d9089e-19e7-427c-a98e-abe80f28b304 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.415961532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58d9089e-19e7-427c-a98e-abe80f28b304 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:46 ha-142481 crio[662]: time="2024-10-10 18:20:46.416200343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58d9089e-19e7-427c-a98e-abe80f28b304 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c07ad1fe2bce4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   0cebb1db5e1d3       busybox-7dff88458-xnwpj
	018e6370bdfda       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   84952d68d14fb       coredns-7c65d6cfc9-xfhq8
	5c208648c013d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   20b740049c585       coredns-7c65d6cfc9-28dll
	2eb7357e74059       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   a78996796d2ea       storage-provisioner
	b32ac96128061       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   d5a1a0a19e5bc       kindnet-4d9v4
	9f7d32719ebd2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   63eed92e7516a       kube-proxy-gwvrh
	80e86419d2aad       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   ef586683ae3a5       kube-vip-ha-142481
	751981b34b5e9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   a1a198bd8221c       kube-apiserver-ha-142481
	4d7eb644bee42       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   df70f8cffd3d4       kube-controller-manager-ha-142481
	43b160f9e1140       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   cf562380e5c8d       kube-scheduler-ha-142481
	206693e605977       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   84fece63e17b5       etcd-ha-142481
	
	
	==> coredns [018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37] <==
	[INFO] 10.244.1.2:34545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001557695s
	[INFO] 10.244.1.2:38085 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108964s
	[INFO] 10.244.1.2:51531 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130545s
	[INFO] 10.244.0.4:44429 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002010271s
	[INFO] 10.244.0.4:54303 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097043s
	[INFO] 10.244.0.4:42398 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046814s
	[INFO] 10.244.0.4:45760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003792s
	[INFO] 10.244.2.2:37649 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126566s
	[INFO] 10.244.2.2:40587 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124439s
	[INFO] 10.244.2.2:57109 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008569s
	[INFO] 10.244.1.2:44569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190494s
	[INFO] 10.244.1.2:36745 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100275s
	[INFO] 10.244.1.2:43935 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110935s
	[INFO] 10.244.0.4:38393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150867s
	[INFO] 10.244.0.4:42701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114037s
	[INFO] 10.244.0.4:38022 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153775s
	[INFO] 10.244.0.4:54617 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066619s
	[INFO] 10.244.2.2:38084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000171s
	[INFO] 10.244.2.2:42518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000188177s
	[INFO] 10.244.2.2:46288 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151696s
	[INFO] 10.244.1.2:54065 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167454s
	[INFO] 10.244.1.2:49349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138818s
	[INFO] 10.244.0.4:46873 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110042s
	[INFO] 10.244.0.4:51740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092418s
	[INFO] 10.244.0.4:46743 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066541s
	
	
	==> coredns [5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51137 - 38313 "HINFO IN 987630183612321637.831480708693955805. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.022844151s
	[INFO] 10.244.2.2:42578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001085393s
	[INFO] 10.244.1.2:46574 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002185448s
	[INFO] 10.244.0.4:39782 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001587443s
	[INFO] 10.244.0.4:53063 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000500521s
	[INFO] 10.244.2.2:54233 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215976s
	[INFO] 10.244.2.2:58923 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163879s
	[INFO] 10.244.1.2:45749 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253197s
	[INFO] 10.244.1.2:48261 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001731s
	[INFO] 10.244.1.2:46306 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179475s
	[INFO] 10.244.0.4:41358 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015898s
	[INFO] 10.244.0.4:57383 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192727s
	[INFO] 10.244.0.4:41993 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083721s
	[INFO] 10.244.0.4:60789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398106s
	[INFO] 10.244.2.2:56030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145862s
	[INFO] 10.244.1.2:34434 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144043s
	[INFO] 10.244.2.2:40687 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170156s
	[INFO] 10.244.1.2:56591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140447s
	[INFO] 10.244.1.2:34586 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215712s
	[INFO] 10.244.0.4:49420 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094221s
	
	
	==> describe nodes <==
	Name:               ha-142481
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T18_14_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:14:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-142481
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 103fd1cad9094f108b20248867a8c9f2
	  System UUID:                103fd1ca-d909-4f10-8b20-248867a8c9f2
	  Boot ID:                    ea46d519-f733-4cdc-b631-5fb0eb75e07c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xnwpj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 coredns-7c65d6cfc9-28dll             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 coredns-7c65d6cfc9-xfhq8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 etcd-ha-142481                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m25s
	  kube-system                 kindnet-4d9v4                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	  kube-system                 kube-apiserver-ha-142481             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-controller-manager-ha-142481    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-proxy-gwvrh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-ha-142481             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-vip-ha-142481                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m22s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m35s (x7 over 6m35s)  kubelet          Node ha-142481 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m35s (x8 over 6m35s)  kubelet          Node ha-142481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x8 over 6m35s)  kubelet          Node ha-142481 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m25s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m25s                  kubelet          Node ha-142481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s                  kubelet          Node ha-142481 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s                  kubelet          Node ha-142481 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s                  node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	  Normal  NodeReady                6m11s                  kubelet          Node ha-142481 status is now: NodeReady
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	
	
	Name:               ha-142481-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_15_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:15:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:18:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    ha-142481-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64af1b9db3cc41a38fc696e261399a82
	  System UUID:                64af1b9d-b3cc-41a3-8fc6-96e261399a82
	  Boot ID:                    1ad9a5aa-6f71-4b62-94f2-fcfc6f775bcc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wf7qs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-142481-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m32s
	  kube-system                 kindnet-5k6j8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-142481-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-controller-manager-ha-142481-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-proxy-srfng                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-142481-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-vip-ha-142481-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node ha-142481-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node ha-142481-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x7 over 5m34s)  kubelet          Node ha-142481-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-142481-m02 status is now: NodeNotReady
	
	
	Name:               ha-142481-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_16_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:16:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    ha-142481-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 940ef061e50d4431baad36dbbc54f8b4
	  System UUID:                940ef061-e50d-4431-baad-36dbbc54f8b4
	  Boot ID:                    48ae8d44-92c8-45fc-a610-982f0242851e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5544l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-142481-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m13s
	  kube-system                 kindnet-cjcsf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m15s
	  kube-system                 kube-apiserver-ha-142481-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-ha-142481-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-cdjzg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-ha-142481-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-vip-ha-142481-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m15s (x8 over 4m15s)  kubelet          Node ha-142481-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s (x8 over 4m15s)  kubelet          Node ha-142481-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s (x7 over 4m15s)  kubelet          Node ha-142481-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	
	
	Name:               ha-142481-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_17_40_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:17:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    ha-142481-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98346cf85e5d4e1e831142d0f2e86f20
	  System UUID:                98346cf8-5e5d-4e1e-8311-42d0f2e86f20
	  Boot ID:                    0fd379eb-2eaf-4e1b-aeda-b9abfe41644d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qbvk6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m7s
	  kube-system                 kube-proxy-4xzhw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m7s)  kubelet          Node ha-142481-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m7s)  kubelet          Node ha-142481-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m7s)  kubelet          Node ha-142481-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-142481-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct10 18:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050451] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040403] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.885132] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.655679] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.952802] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct10 18:14] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.063573] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063579] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.169358] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.137879] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.284778] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.055847] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.359583] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.065935] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.163908] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.085716] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.930913] kauditd_printk_skb: 69 callbacks suppressed
	[Oct10 18:15] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58] <==
	{"level":"warn","ts":"2024-10-10T18:20:46.646878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.678669Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.687456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.691395Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.703268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.712386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.720780Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.725806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.729242Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.734932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.742839Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.746682Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.750275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.757546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.760853Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.767452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.774152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.782088Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.785769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.789155Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.793224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.800415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.808476Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.838371Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:46.847653Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:20:46 up 7 min,  0 users,  load average: 0.50, 0.39, 0.19
	Linux ha-142481 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3] <==
	I1010 18:20:15.395542       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:25.390200       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:25.390354       1 main.go:299] handling current node
	I1010 18:20:25.390392       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:25.390416       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	I1010 18:20:25.390644       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:25.390677       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:25.390737       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:25.390755       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:35.399378       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:35.399430       1 main.go:299] handling current node
	I1010 18:20:35.399452       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:35.399457       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	I1010 18:20:35.399642       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:35.399667       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:35.399718       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:35.399723       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:45.399629       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:45.399760       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:45.399950       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:45.399978       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:45.400080       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:45.400105       1 main.go:299] handling current node
	I1010 18:20:45.400138       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:45.400158       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c] <==
	I1010 18:14:21.601752       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1010 18:14:21.615538       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1010 18:14:22.685756       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1010 18:14:22.961093       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1010 18:15:13.597943       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.598021       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.162µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1010 18:15:13.599137       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.600311       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.601619       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.769951ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1010 18:17:03.850296       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50978: use of closed network connection
	E1010 18:17:04.060164       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50998: use of closed network connection
	E1010 18:17:04.265073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51022: use of closed network connection
	E1010 18:17:04.497148       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51026: use of closed network connection
	E1010 18:17:04.691753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51052: use of closed network connection
	E1010 18:17:04.874313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51072: use of closed network connection
	E1010 18:17:05.055509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51096: use of closed network connection
	E1010 18:17:05.241806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51110: use of closed network connection
	E1010 18:17:05.418962       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51128: use of closed network connection
	E1010 18:17:05.714305       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35886: use of closed network connection
	E1010 18:17:05.894226       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35894: use of closed network connection
	E1010 18:17:06.084951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35922: use of closed network connection
	E1010 18:17:06.281751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35936: use of closed network connection
	E1010 18:17:06.459430       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35954: use of closed network connection
	E1010 18:17:06.642941       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35966: use of closed network connection
	W1010 18:18:37.363890       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.175]
	
	
	==> kube-controller-manager [4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf] <==
	I1010 18:17:39.636355       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-142481-m04" podCIDRs=["10.244.3.0/24"]
	I1010 18:17:39.636414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.636469       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.668112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.689740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:40.177402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:40.233291       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:41.187681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:41.226193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:42.243172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:42.243646       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-142481-m04"
	I1010 18:17:42.333986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:49.941287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:59.249000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:59.249257       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-142481-m04"
	I1010 18:17:59.269371       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:00.212787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:09.988078       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:57.270927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-142481-m04"
	I1010 18:18:57.272138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:18:57.296852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:18:57.478314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.230176ms"
	I1010 18:18:57.478428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.474µs"
	I1010 18:19:00.278371       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:19:02.479119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	
	
	==> kube-proxy [9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 18:14:24.446239       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 18:14:24.508320       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.104"]
	E1010 18:14:24.508809       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:14:24.556831       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 18:14:24.556922       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 18:14:24.556961       1 server_linux.go:169] "Using iptables Proxier"
	I1010 18:14:24.559536       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:14:24.560518       1 server.go:483] "Version info" version="v1.31.1"
	I1010 18:14:24.560742       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:14:24.562971       1 config.go:199] "Starting service config controller"
	I1010 18:14:24.563611       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 18:14:24.563720       1 config.go:105] "Starting endpoint slice config controller"
	I1010 18:14:24.563744       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 18:14:24.566215       1 config.go:328] "Starting node config controller"
	I1010 18:14:24.566227       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 18:14:24.665476       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1010 18:14:24.665712       1 shared_informer.go:320] Caches are synced for service config
	I1010 18:14:24.667666       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026] <==
	W1010 18:14:16.494936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1010 18:14:16.495042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.517223       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1010 18:14:16.517488       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1010 18:14:16.544128       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 18:14:16.544233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.560806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.560856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.640427       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1010 18:14:16.640554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.701938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.702008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.773339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.773523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.873800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.874006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1010 18:14:18.221733       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1010 18:16:59.352658       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wf7qs\": pod busybox-7dff88458-wf7qs is already assigned to node \"ha-142481-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wf7qs" node="ha-142481-m02"
	E1010 18:16:59.352878       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8cfeb378-41dd-4850-bbc6-610453612cf5(default/busybox-7dff88458-wf7qs) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wf7qs"
	E1010 18:16:59.352933       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wf7qs\": pod busybox-7dff88458-wf7qs is already assigned to node \"ha-142481-m02\"" pod="default/busybox-7dff88458-wf7qs"
	I1010 18:16:59.352990       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wf7qs" node="ha-142481-m02"
	E1010 18:17:39.876287       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qbvk6\": pod kindnet-qbvk6 is already assigned to node \"ha-142481-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qbvk6" node="ha-142481-m04"
	E1010 18:17:39.876531       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 67b280c2-562d-45e0-a362-726dadaf5cf6(kube-system/kindnet-qbvk6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qbvk6"
	E1010 18:17:39.876554       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qbvk6\": pod kindnet-qbvk6 is already assigned to node \"ha-142481-m04\"" pod="kube-system/kindnet-qbvk6"
	I1010 18:17:39.876861       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qbvk6" node="ha-142481-m04"
	
	
	==> kubelet <==
	Oct 10 18:19:21 ha-142481 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 18:19:21 ha-142481 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 18:19:21 ha-142481 kubelet[1298]: E1010 18:19:21.653774    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584361653351989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:21 ha-142481 kubelet[1298]: E1010 18:19:21.654165    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584361653351989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:31 ha-142481 kubelet[1298]: E1010 18:19:31.655501    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584371655103881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:31 ha-142481 kubelet[1298]: E1010 18:19:31.656061    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584371655103881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:41 ha-142481 kubelet[1298]: E1010 18:19:41.657888    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584381657459506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:41 ha-142481 kubelet[1298]: E1010 18:19:41.657923    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584381657459506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:51 ha-142481 kubelet[1298]: E1010 18:19:51.662516    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584391660533273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:51 ha-142481 kubelet[1298]: E1010 18:19:51.662805    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584391660533273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:01 ha-142481 kubelet[1298]: E1010 18:20:01.665482    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584401664880599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:01 ha-142481 kubelet[1298]: E1010 18:20:01.665528    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584401664880599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:11 ha-142481 kubelet[1298]: E1010 18:20:11.668335    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584411667894103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:11 ha-142481 kubelet[1298]: E1010 18:20:11.668374    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584411667894103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.541634    1298 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 18:20:21 ha-142481 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.670317    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584421670063294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.670363    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584421670063294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:31 ha-142481 kubelet[1298]: E1010 18:20:31.672182    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584431671864331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:31 ha-142481 kubelet[1298]: E1010 18:20:31.672436    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584431671864331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:41 ha-142481 kubelet[1298]: E1010 18:20:41.682034    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584441680876363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:41 ha-142481 kubelet[1298]: E1010 18:20:41.682449    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584441680876363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-142481 -n ha-142481
helpers_test.go:261: (dbg) Run:  kubectl --context ha-142481 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.74s)
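For local debugging, roughly the same checks this test performs can be reproduced by hand. The commands below are a sketch assembled from invocations that already appear in the harness output above (profile name ha-142481 and flags taken from the log); they assume the profile from this run still exists on the machine:

	# Query overall profile status, as the test harness does
	out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr

	# Cross-check what the API server reports for the cluster's nodes
	kubectl --context ha-142481 get nodes

	# Collect the same post-mortem logs the harness gathers on failure
	out/minikube-linux-amd64 -p ha-142481 logs -n 25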

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr: (4.10637107s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-142481 -n ha-142481
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-142481 logs -n 25: (1.567989401s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481:/home/docker/cp-test_ha-142481-m03_ha-142481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481 sudo cat                                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m02:/home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m04 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp testdata/cp-test.txt                                                | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481:/home/docker/cp-test_ha-142481-m04_ha-142481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481 sudo cat                                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m02:/home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03:/home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m03 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-142481 node stop m02 -v=7                                                     | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-142481 node start m02 -v=7                                                    | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 18:13:38
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:13:38.106562   99368 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:13:38.106682   99368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:38.106690   99368 out.go:358] Setting ErrFile to fd 2...
	I1010 18:13:38.106694   99368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:38.106895   99368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:13:38.107477   99368 out.go:352] Setting JSON to false
	I1010 18:13:38.108309   99368 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6964,"bootTime":1728577054,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:13:38.108413   99368 start.go:139] virtualization: kvm guest
	I1010 18:13:38.110824   99368 out.go:177] * [ha-142481] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 18:13:38.112418   99368 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 18:13:38.112454   99368 notify.go:220] Checking for updates...
	I1010 18:13:38.114936   99368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:13:38.116370   99368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:13:38.117745   99368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.118944   99368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:13:38.120250   99368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:13:38.121551   99368 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 18:13:38.157644   99368 out.go:177] * Using the kvm2 driver based on user configuration
	I1010 18:13:38.158888   99368 start.go:297] selected driver: kvm2
	I1010 18:13:38.158919   99368 start.go:901] validating driver "kvm2" against <nil>
	I1010 18:13:38.158934   99368 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:13:38.159711   99368 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:13:38.159814   99368 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 18:13:38.174780   99368 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 18:13:38.174840   99368 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 18:13:38.175095   99368 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:13:38.175132   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:13:38.175195   99368 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1010 18:13:38.175219   99368 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 18:13:38.175271   99368 start.go:340] cluster config:
	{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:13:38.175372   99368 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:13:38.177295   99368 out.go:177] * Starting "ha-142481" primary control-plane node in "ha-142481" cluster
	I1010 18:13:38.178523   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:13:38.178564   99368 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:13:38.178578   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:13:38.178671   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:13:38.178686   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:13:38.179056   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:13:38.179080   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json: {Name:mk6ba06e5ddbd39667f8d6031429fc5b567ca233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:13:38.179240   99368 start.go:360] acquireMachinesLock for ha-142481: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:13:38.179277   99368 start.go:364] duration metric: took 20.536µs to acquireMachinesLock for "ha-142481"
	I1010 18:13:38.179299   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:13:38.179350   99368 start.go:125] createHost starting for "" (driver="kvm2")
	I1010 18:13:38.180956   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:13:38.181134   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:13:38.181190   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:13:38.195735   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I1010 18:13:38.196239   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:13:38.196810   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:13:38.196834   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:13:38.197229   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:13:38.197439   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:13:38.197656   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:13:38.197815   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:13:38.197850   99368 client.go:168] LocalClient.Create starting
	I1010 18:13:38.197896   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:13:38.197929   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:13:38.197946   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:13:38.197994   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:13:38.198011   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:13:38.198032   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:13:38.198051   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:13:38.198059   99368 main.go:141] libmachine: (ha-142481) Calling .PreCreateCheck
	I1010 18:13:38.198443   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:13:38.198814   99368 main.go:141] libmachine: Creating machine...
	I1010 18:13:38.198829   99368 main.go:141] libmachine: (ha-142481) Calling .Create
	I1010 18:13:38.199006   99368 main.go:141] libmachine: (ha-142481) Creating KVM machine...
	I1010 18:13:38.200423   99368 main.go:141] libmachine: (ha-142481) DBG | found existing default KVM network
	I1010 18:13:38.201134   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.200987   99391 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1010 18:13:38.201152   99368 main.go:141] libmachine: (ha-142481) DBG | created network xml: 
	I1010 18:13:38.201163   99368 main.go:141] libmachine: (ha-142481) DBG | <network>
	I1010 18:13:38.201168   99368 main.go:141] libmachine: (ha-142481) DBG |   <name>mk-ha-142481</name>
	I1010 18:13:38.201173   99368 main.go:141] libmachine: (ha-142481) DBG |   <dns enable='no'/>
	I1010 18:13:38.201179   99368 main.go:141] libmachine: (ha-142481) DBG |   
	I1010 18:13:38.201186   99368 main.go:141] libmachine: (ha-142481) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1010 18:13:38.201195   99368 main.go:141] libmachine: (ha-142481) DBG |     <dhcp>
	I1010 18:13:38.201204   99368 main.go:141] libmachine: (ha-142481) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1010 18:13:38.201210   99368 main.go:141] libmachine: (ha-142481) DBG |     </dhcp>
	I1010 18:13:38.201224   99368 main.go:141] libmachine: (ha-142481) DBG |   </ip>
	I1010 18:13:38.201233   99368 main.go:141] libmachine: (ha-142481) DBG |   
	I1010 18:13:38.201241   99368 main.go:141] libmachine: (ha-142481) DBG | </network>
	I1010 18:13:38.201253   99368 main.go:141] libmachine: (ha-142481) DBG | 
	I1010 18:13:38.206109   99368 main.go:141] libmachine: (ha-142481) DBG | trying to create private KVM network mk-ha-142481 192.168.39.0/24...
	I1010 18:13:38.273921   99368 main.go:141] libmachine: (ha-142481) DBG | private KVM network mk-ha-142481 192.168.39.0/24 created
	I1010 18:13:38.273973   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.273888   99391 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.273987   99368 main.go:141] libmachine: (ha-142481) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 ...
	I1010 18:13:38.274008   99368 main.go:141] libmachine: (ha-142481) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:13:38.274030   99368 main.go:141] libmachine: (ha-142481) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:13:38.538580   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.538442   99391 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa...
	I1010 18:13:38.734956   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.734800   99391 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/ha-142481.rawdisk...
	I1010 18:13:38.734986   99368 main.go:141] libmachine: (ha-142481) DBG | Writing magic tar header
	I1010 18:13:38.734996   99368 main.go:141] libmachine: (ha-142481) DBG | Writing SSH key tar header
	I1010 18:13:38.735006   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.734920   99391 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 ...
	I1010 18:13:38.735023   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481
	I1010 18:13:38.735054   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:13:38.735062   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 (perms=drwx------)
	I1010 18:13:38.735074   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:13:38.735083   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:13:38.735098   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:13:38.735107   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.735121   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:13:38.735132   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:13:38.735139   99368 main.go:141] libmachine: (ha-142481) Creating domain...
	I1010 18:13:38.735156   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:13:38.735166   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:13:38.735171   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:13:38.735177   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home
	I1010 18:13:38.735183   99368 main.go:141] libmachine: (ha-142481) DBG | Skipping /home - not owner
	I1010 18:13:38.736388   99368 main.go:141] libmachine: (ha-142481) define libvirt domain using xml: 
	I1010 18:13:38.736417   99368 main.go:141] libmachine: (ha-142481) <domain type='kvm'>
	I1010 18:13:38.736427   99368 main.go:141] libmachine: (ha-142481)   <name>ha-142481</name>
	I1010 18:13:38.736439   99368 main.go:141] libmachine: (ha-142481)   <memory unit='MiB'>2200</memory>
	I1010 18:13:38.736471   99368 main.go:141] libmachine: (ha-142481)   <vcpu>2</vcpu>
	I1010 18:13:38.736493   99368 main.go:141] libmachine: (ha-142481)   <features>
	I1010 18:13:38.736527   99368 main.go:141] libmachine: (ha-142481)     <acpi/>
	I1010 18:13:38.736554   99368 main.go:141] libmachine: (ha-142481)     <apic/>
	I1010 18:13:38.736566   99368 main.go:141] libmachine: (ha-142481)     <pae/>
	I1010 18:13:38.736588   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736600   99368 main.go:141] libmachine: (ha-142481)   </features>
	I1010 18:13:38.736610   99368 main.go:141] libmachine: (ha-142481)   <cpu mode='host-passthrough'>
	I1010 18:13:38.736620   99368 main.go:141] libmachine: (ha-142481)   
	I1010 18:13:38.736633   99368 main.go:141] libmachine: (ha-142481)   </cpu>
	I1010 18:13:38.736643   99368 main.go:141] libmachine: (ha-142481)   <os>
	I1010 18:13:38.736649   99368 main.go:141] libmachine: (ha-142481)     <type>hvm</type>
	I1010 18:13:38.736661   99368 main.go:141] libmachine: (ha-142481)     <boot dev='cdrom'/>
	I1010 18:13:38.736672   99368 main.go:141] libmachine: (ha-142481)     <boot dev='hd'/>
	I1010 18:13:38.736684   99368 main.go:141] libmachine: (ha-142481)     <bootmenu enable='no'/>
	I1010 18:13:38.736693   99368 main.go:141] libmachine: (ha-142481)   </os>
	I1010 18:13:38.736700   99368 main.go:141] libmachine: (ha-142481)   <devices>
	I1010 18:13:38.736710   99368 main.go:141] libmachine: (ha-142481)     <disk type='file' device='cdrom'>
	I1010 18:13:38.736729   99368 main.go:141] libmachine: (ha-142481)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/boot2docker.iso'/>
	I1010 18:13:38.736737   99368 main.go:141] libmachine: (ha-142481)       <target dev='hdc' bus='scsi'/>
	I1010 18:13:38.736742   99368 main.go:141] libmachine: (ha-142481)       <readonly/>
	I1010 18:13:38.736748   99368 main.go:141] libmachine: (ha-142481)     </disk>
	I1010 18:13:38.736754   99368 main.go:141] libmachine: (ha-142481)     <disk type='file' device='disk'>
	I1010 18:13:38.736761   99368 main.go:141] libmachine: (ha-142481)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:13:38.736768   99368 main.go:141] libmachine: (ha-142481)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/ha-142481.rawdisk'/>
	I1010 18:13:38.736773   99368 main.go:141] libmachine: (ha-142481)       <target dev='hda' bus='virtio'/>
	I1010 18:13:38.736780   99368 main.go:141] libmachine: (ha-142481)     </disk>
	I1010 18:13:38.736789   99368 main.go:141] libmachine: (ha-142481)     <interface type='network'>
	I1010 18:13:38.736795   99368 main.go:141] libmachine: (ha-142481)       <source network='mk-ha-142481'/>
	I1010 18:13:38.736800   99368 main.go:141] libmachine: (ha-142481)       <model type='virtio'/>
	I1010 18:13:38.736804   99368 main.go:141] libmachine: (ha-142481)     </interface>
	I1010 18:13:38.736811   99368 main.go:141] libmachine: (ha-142481)     <interface type='network'>
	I1010 18:13:38.736816   99368 main.go:141] libmachine: (ha-142481)       <source network='default'/>
	I1010 18:13:38.736822   99368 main.go:141] libmachine: (ha-142481)       <model type='virtio'/>
	I1010 18:13:38.736831   99368 main.go:141] libmachine: (ha-142481)     </interface>
	I1010 18:13:38.736837   99368 main.go:141] libmachine: (ha-142481)     <serial type='pty'>
	I1010 18:13:38.736842   99368 main.go:141] libmachine: (ha-142481)       <target port='0'/>
	I1010 18:13:38.736868   99368 main.go:141] libmachine: (ha-142481)     </serial>
	I1010 18:13:38.736882   99368 main.go:141] libmachine: (ha-142481)     <console type='pty'>
	I1010 18:13:38.736896   99368 main.go:141] libmachine: (ha-142481)       <target type='serial' port='0'/>
	I1010 18:13:38.736911   99368 main.go:141] libmachine: (ha-142481)     </console>
	I1010 18:13:38.736921   99368 main.go:141] libmachine: (ha-142481)     <rng model='virtio'>
	I1010 18:13:38.736929   99368 main.go:141] libmachine: (ha-142481)       <backend model='random'>/dev/random</backend>
	I1010 18:13:38.736935   99368 main.go:141] libmachine: (ha-142481)     </rng>
	I1010 18:13:38.736942   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736951   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736962   99368 main.go:141] libmachine: (ha-142481)   </devices>
	I1010 18:13:38.736973   99368 main.go:141] libmachine: (ha-142481) </domain>
	I1010 18:13:38.737007   99368 main.go:141] libmachine: (ha-142481) 
	I1010 18:13:38.741472   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:b1:0c:5d in network default
	I1010 18:13:38.742188   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:38.742202   99368 main.go:141] libmachine: (ha-142481) Ensuring networks are active...
	I1010 18:13:38.743102   99368 main.go:141] libmachine: (ha-142481) Ensuring network default is active
	I1010 18:13:38.743484   99368 main.go:141] libmachine: (ha-142481) Ensuring network mk-ha-142481 is active
	I1010 18:13:38.743981   99368 main.go:141] libmachine: (ha-142481) Getting domain xml...
	I1010 18:13:38.744831   99368 main.go:141] libmachine: (ha-142481) Creating domain...
	I1010 18:13:39.943643   99368 main.go:141] libmachine: (ha-142481) Waiting to get IP...
	I1010 18:13:39.944415   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:39.944819   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:39.944886   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:39.944805   99391 retry.go:31] will retry after 263.450232ms: waiting for machine to come up
	I1010 18:13:40.210494   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.210938   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.210979   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.210904   99391 retry.go:31] will retry after 318.83444ms: waiting for machine to come up
	I1010 18:13:40.531556   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.531982   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.532010   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.531946   99391 retry.go:31] will retry after 379.250744ms: waiting for machine to come up
	I1010 18:13:40.912440   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.912909   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.912942   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.912844   99391 retry.go:31] will retry after 505.831382ms: waiting for machine to come up
	I1010 18:13:41.420670   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:41.421119   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:41.421141   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:41.421071   99391 retry.go:31] will retry after 555.074801ms: waiting for machine to come up
	I1010 18:13:41.977849   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:41.978257   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:41.978281   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:41.978194   99391 retry.go:31] will retry after 636.152434ms: waiting for machine to come up
	I1010 18:13:42.615909   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:42.616285   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:42.616320   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:42.616236   99391 retry.go:31] will retry after 907.451913ms: waiting for machine to come up
	I1010 18:13:43.524700   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:43.525164   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:43.525241   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:43.525119   99391 retry.go:31] will retry after 916.746032ms: waiting for machine to come up
	I1010 18:13:44.443019   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:44.443439   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:44.443463   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:44.443379   99391 retry.go:31] will retry after 1.722399675s: waiting for machine to come up
	I1010 18:13:46.168252   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:46.168660   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:46.168691   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:46.168625   99391 retry.go:31] will retry after 2.191060126s: waiting for machine to come up
	I1010 18:13:48.361115   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:48.361666   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:48.361699   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:48.361609   99391 retry.go:31] will retry after 2.390239739s: waiting for machine to come up
	I1010 18:13:50.755200   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:50.755610   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:50.755636   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:50.755576   99391 retry.go:31] will retry after 2.188596051s: waiting for machine to come up
	I1010 18:13:52.946995   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:52.947360   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:52.947382   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:52.947318   99391 retry.go:31] will retry after 3.863064875s: waiting for machine to come up
	I1010 18:13:56.814839   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:56.815487   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:56.815508   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:56.815409   99391 retry.go:31] will retry after 3.762373701s: waiting for machine to come up
	I1010 18:14:00.580406   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.580915   99368 main.go:141] libmachine: (ha-142481) Found IP for machine: 192.168.39.104
	I1010 18:14:00.580940   99368 main.go:141] libmachine: (ha-142481) Reserving static IP address...
	I1010 18:14:00.580952   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has current primary IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.581384   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find host DHCP lease matching {name: "ha-142481", mac: "52:54:00:3e:fa:00", ip: "192.168.39.104"} in network mk-ha-142481
	I1010 18:14:00.656496   99368 main.go:141] libmachine: (ha-142481) DBG | Getting to WaitForSSH function...
	I1010 18:14:00.656530   99368 main.go:141] libmachine: (ha-142481) Reserved static IP address: 192.168.39.104
	I1010 18:14:00.656576   99368 main.go:141] libmachine: (ha-142481) Waiting for SSH to be available...
	I1010 18:14:00.659584   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.659994   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.660032   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.660120   99368 main.go:141] libmachine: (ha-142481) DBG | Using SSH client type: external
	I1010 18:14:00.660175   99368 main.go:141] libmachine: (ha-142481) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa (-rw-------)
	I1010 18:14:00.660252   99368 main.go:141] libmachine: (ha-142481) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:14:00.660280   99368 main.go:141] libmachine: (ha-142481) DBG | About to run SSH command:
	I1010 18:14:00.660297   99368 main.go:141] libmachine: (ha-142481) DBG | exit 0
	I1010 18:14:00.789008   99368 main.go:141] libmachine: (ha-142481) DBG | SSH cmd err, output: <nil>: 
	I1010 18:14:00.789292   99368 main.go:141] libmachine: (ha-142481) KVM machine creation complete!
	I1010 18:14:00.789591   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:14:00.790247   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:00.790563   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:00.790779   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:14:00.790797   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:00.791977   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:14:00.791993   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:14:00.792000   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:14:00.792007   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:00.795049   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.795517   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.795546   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.795737   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:00.795931   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.796109   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.796201   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:00.796384   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:00.796677   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:00.796694   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:14:00.904506   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:00.904529   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:14:00.904538   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:00.907535   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.907882   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.907924   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.908104   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:00.908324   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.908499   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.908658   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:00.908892   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:00.909076   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:00.909086   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:14:01.018108   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:14:01.018217   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:14:01.018228   99368 main.go:141] libmachine: Provisioning with buildroot...
	I1010 18:14:01.018236   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.018570   99368 buildroot.go:166] provisioning hostname "ha-142481"
	I1010 18:14:01.018602   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.018780   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.021625   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.022001   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.022049   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.022142   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.022330   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.022485   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.022628   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.022792   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:01.023020   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:01.023040   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481 && echo "ha-142481" | sudo tee /etc/hostname
	I1010 18:14:01.148746   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481
	
	I1010 18:14:01.148780   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.151700   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.152069   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.152101   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.152379   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.152566   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.152733   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.153007   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.153254   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:01.153456   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:01.153473   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:14:01.270656   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:01.270702   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:14:01.270768   99368 buildroot.go:174] setting up certificates
	I1010 18:14:01.270784   99368 provision.go:84] configureAuth start
	I1010 18:14:01.270804   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.271123   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:01.274054   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.274377   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.274414   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.274599   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.277056   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.277372   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.277402   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.277532   99368 provision.go:143] copyHostCerts
	I1010 18:14:01.277566   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:01.277608   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:14:01.277620   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:01.277701   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:14:01.277845   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:01.277882   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:14:01.277893   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:01.277935   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:14:01.278014   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:01.278037   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:14:01.278043   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:01.278078   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:14:01.278160   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481 san=[127.0.0.1 192.168.39.104 ha-142481 localhost minikube]
	I1010 18:14:01.863097   99368 provision.go:177] copyRemoteCerts
	I1010 18:14:01.863162   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:14:01.863187   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.866290   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.866626   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.866657   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.866843   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.867075   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.867295   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.867474   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:01.951802   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:14:01.951888   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:14:01.976504   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:14:01.976590   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1010 18:14:02.000608   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:14:02.000694   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 18:14:02.025514   99368 provision.go:87] duration metric: took 754.678106ms to configureAuth
	I1010 18:14:02.025558   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:14:02.025780   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:02.025872   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.028822   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.029419   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.029448   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.029637   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.029859   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.030076   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.030249   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.030408   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:02.030613   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:02.030638   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:14:02.255598   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:14:02.255635   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:14:02.255663   99368 main.go:141] libmachine: (ha-142481) Calling .GetURL
	I1010 18:14:02.256998   99368 main.go:141] libmachine: (ha-142481) DBG | Using libvirt version 6000000
	I1010 18:14:02.259693   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.260061   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.260105   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.260245   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:14:02.260269   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:14:02.260277   99368 client.go:171] duration metric: took 24.062416136s to LocalClient.Create
	I1010 18:14:02.260305   99368 start.go:167] duration metric: took 24.062491775s to libmachine.API.Create "ha-142481"
	I1010 18:14:02.260317   99368 start.go:293] postStartSetup for "ha-142481" (driver="kvm2")
	I1010 18:14:02.260330   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:14:02.260355   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.260598   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:14:02.260623   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.262655   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.262966   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.262995   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.263106   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.263281   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.263418   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.263549   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.347386   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:14:02.352007   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:14:02.352037   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:14:02.352118   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:14:02.352241   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:14:02.352255   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:14:02.352383   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:14:02.361986   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:02.387757   99368 start.go:296] duration metric: took 127.42447ms for postStartSetup
	I1010 18:14:02.387817   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:14:02.388481   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:02.391530   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.391900   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.391927   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.392187   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:02.392385   99368 start.go:128] duration metric: took 24.213024958s to createHost
	I1010 18:14:02.392410   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.394865   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.395239   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.395269   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.395418   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.395616   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.395799   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.395913   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.396045   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:02.396233   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:02.396253   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:14:02.506374   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584042.463674877
	
	I1010 18:14:02.506405   99368 fix.go:216] guest clock: 1728584042.463674877
	I1010 18:14:02.506415   99368 fix.go:229] Guest: 2024-10-10 18:14:02.463674877 +0000 UTC Remote: 2024-10-10 18:14:02.392397471 +0000 UTC m=+24.322985546 (delta=71.277406ms)
	I1010 18:14:02.506501   99368 fix.go:200] guest clock delta is within tolerance: 71.277406ms
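The fix.go lines above compare the guest's `date +%s.%N` output with the host clock and only correct the guest clock when the difference exceeds a tolerance; here the ~71ms delta is accepted. A minimal Go sketch of that comparison, for illustration only (this is not minikube's fix.go, and the 1-second tolerance is an assumed threshold; the log only shows that ~71ms counted as within tolerance):

package main

import (
    "fmt"
    "strconv"
    "time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is ahead of (positive) or behind (negative) the host clock.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
    secs, err := strconv.ParseFloat(guestOut, 64)
    if err != nil {
        return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
    }
    guest := time.Unix(0, int64(secs*float64(time.Second)))
    return guest.Sub(host), nil
}

func main() {
    // Values taken from the log lines above.
    host := time.Date(2024, 10, 10, 18, 14, 2, 392397471, time.UTC)
    delta, err := guestClockDelta("1728584042.463674877", host)
    if err != nil {
        panic(err)
    }
    const tolerance = time.Second // assumed; the real threshold is not shown in the log
    if delta < 0 {
        delta = -delta
    }
    if delta <= tolerance {
        fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    } else {
        fmt.Printf("guest clock delta %v exceeds tolerance; a clock sync would be needed\n", delta)
    }
}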
	I1010 18:14:02.506513   99368 start.go:83] releasing machines lock for "ha-142481", held for 24.327223548s
	I1010 18:14:02.506550   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.506889   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:02.509401   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.509764   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.509802   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.509942   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510549   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510772   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510843   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:14:02.510929   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.511003   99368 ssh_runner.go:195] Run: cat /version.json
	I1010 18:14:02.511038   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.513796   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.513896   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514234   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.514254   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514280   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.514293   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514533   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.514631   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.514713   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.514804   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.514890   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.514938   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.515026   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.515073   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.615715   99368 ssh_runner.go:195] Run: systemctl --version
	I1010 18:14:02.621955   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:14:02.785775   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:14:02.792271   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:14:02.792352   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:14:02.808426   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:14:02.808464   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:14:02.808542   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:14:02.825314   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:14:02.842065   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:14:02.842135   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:14:02.858984   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:14:02.876330   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:14:02.990523   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:14:03.132316   99368 docker.go:233] disabling docker service ...
	I1010 18:14:03.132386   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:14:03.147477   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:14:03.161268   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:14:03.304325   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:14:03.429397   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:14:03.443898   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:14:03.463181   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:14:03.463273   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.474215   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:14:03.474286   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.485513   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.496394   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.507084   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:14:03.517675   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.527867   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.545825   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.556723   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:14:03.566428   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:14:03.566513   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:14:03.579726   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
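The three commands above form a fallback: reading net.bridge.bridge-nf-call-iptables fails because the br_netfilter module is not yet loaded, so the module is loaded explicitly and IPv4 forwarding is switched on. A rough Go sketch of the same sequence, run locally with os/exec rather than over SSH, purely as an illustration of the logged behaviour:

package main

import (
    "log"
    "os/exec"
)

// run executes a command, logs its combined output, and returns any error.
func run(name string, args ...string) error {
    out, err := exec.Command(name, args...).CombinedOutput()
    log.Printf("%s %v: %s", name, args, out)
    return err
}

func main() {
    // Verify the bridge-netfilter sysctl; a missing key usually means the
    // br_netfilter module is not loaded yet, so load it as a fallback.
    if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
        if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
            log.Fatalf("modprobe br_netfilter: %v", err)
        }
    }
    // Make sure IPv4 forwarding is enabled, mirroring the command in the log.
    if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
        log.Fatalf("enable ip_forward: %v", err)
    }
}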
	I1010 18:14:03.589897   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:03.711306   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:14:03.812353   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:14:03.812440   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:14:03.817265   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:14:03.817331   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:14:03.821238   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:14:03.865031   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:14:03.865131   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:03.893405   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:03.923688   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:14:03.925089   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:03.927862   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:03.928210   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:03.928239   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:03.928482   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:14:03.932808   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:03.947607   99368 kubeadm.go:883] updating cluster {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:14:03.947723   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:14:03.947771   99368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:14:03.980321   99368 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 18:14:03.980402   99368 ssh_runner.go:195] Run: which lz4
	I1010 18:14:03.984490   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1010 18:14:03.984586   99368 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 18:14:03.988814   99368 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 18:14:03.988866   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 18:14:05.363098   99368 crio.go:462] duration metric: took 1.37853137s to copy over tarball
	I1010 18:14:05.363172   99368 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 18:14:07.378827   99368 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.01562073s)
	I1010 18:14:07.378863   99368 crio.go:469] duration metric: took 2.015730634s to extract the tarball
	I1010 18:14:07.378873   99368 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 18:14:07.415494   99368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:14:07.461637   99368 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:14:07.461668   99368 cache_images.go:84] Images are preloaded, skipping loading
	I1010 18:14:07.461678   99368 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I1010 18:14:07.461810   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:14:07.461895   99368 ssh_runner.go:195] Run: crio config
	I1010 18:14:07.511179   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:14:07.511203   99368 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 18:14:07.511219   99368 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 18:14:07.511240   99368 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-142481 NodeName:ha-142481 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:14:07.511378   99368 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-142481"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:14:07.511402   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:14:07.511447   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:14:07.530825   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:14:07.530966   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1010 18:14:07.531061   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:07.541336   99368 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:14:07.541418   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1010 18:14:07.551149   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1010 18:14:07.567775   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:14:07.585048   99368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1010 18:14:07.601614   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1010 18:14:07.618435   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:14:07.622366   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:07.634534   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:07.769061   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:14:07.786728   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.104
	I1010 18:14:07.786757   99368 certs.go:194] generating shared ca certs ...
	I1010 18:14:07.786780   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.786963   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:14:07.787019   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:14:07.787049   99368 certs.go:256] generating profile certs ...
	I1010 18:14:07.787126   99368 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:14:07.787145   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt with IP's: []
	I1010 18:14:07.903290   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt ...
	I1010 18:14:07.903319   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt: {Name:mkc3e45adeab2c56df47bde3919e2c30e370ae85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.903506   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key ...
	I1010 18:14:07.903521   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key: {Name:mka461c8525916f7bc85840820bc278320ec6313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.903626   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560
	I1010 18:14:07.903643   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.254]
	I1010 18:14:08.280801   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 ...
	I1010 18:14:08.280860   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560: {Name:mk5acd7350e86bebedada3fd330840a975c10cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.281063   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560 ...
	I1010 18:14:08.281078   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560: {Name:mk1053269a10fe97cf940622a274d032edb2023c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.281164   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:14:08.281248   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
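The apiserver certificate generated above is signed for the IP SANs shown in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.104 and the HA virtual IP 192.168.39.254). A self-contained Go sketch of issuing a CA-signed serving certificate with that SAN list; the key sizes, subjects and validity periods are illustrative assumptions, not what minikube's crypto.go actually uses:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "math/big"
    "net"
    "time"
)

func main() {
    // Self-signed CA standing in for minikubeCA (illustrative key size/validity).
    caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().AddDate(10, 0, 0),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    caCert, err := x509.ParseCertificate(caDER)
    if err != nil {
        panic(err)
    }

    // Serving certificate carrying the IP SANs listed in the log above.
    srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(3, 0, 0),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            net.ParseIP("192.168.39.104"), net.ParseIP("192.168.39.254"),
        },
    }
    if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey); err != nil {
        panic(err)
    }
}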
	I1010 18:14:08.281307   99368 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:14:08.281325   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt with IP's: []
	I1010 18:14:08.428528   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt ...
	I1010 18:14:08.428562   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt: {Name:mk868dec1ca79ab4285d30dbc6ee93e0f0415a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.428730   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key ...
	I1010 18:14:08.428741   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key: {Name:mk5632176fd6e0bd1fedbd590f44cb77fc86fc75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.428812   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:14:08.428829   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:14:08.428839   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:14:08.428867   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:14:08.428886   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:14:08.428905   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:14:08.428919   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:14:08.428930   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:14:08.428986   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:14:08.429023   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:14:08.429032   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:14:08.429057   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:14:08.429082   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:14:08.429103   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:14:08.429139   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:08.429166   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.429180   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.429192   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.429725   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:14:08.459934   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:14:08.486537   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:14:08.511793   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:14:08.536743   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:14:08.569819   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:14:08.605499   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:14:08.633615   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:14:08.657501   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:14:08.684906   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:14:08.712812   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:14:08.741219   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:14:08.760444   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:14:08.766741   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:14:08.778475   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.783145   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.783213   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.789500   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:14:08.800279   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:14:08.811452   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.816338   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.816413   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.822105   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:14:08.833024   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:14:08.844522   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.849855   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.849915   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.856326   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
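The ca-certificates steps above follow OpenSSL's subject-hash convention: each certificate under /usr/share/ca-certificates is linked from /etc/ssl/certs/<subject-hash>.0 (here 3ec20f2e.0, b5213941.0 and 51391683.0), which is how TLS clients locate trusted certs. A tiny Go sketch that derives such a link name, for illustration only:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Path taken from the log; the 51391683.0 link name seen above comes from
    // this subject-hash value.
    cert := "/usr/share/ca-certificates/88876.pem"
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    if err != nil {
        panic(err)
    }
    hash := strings.TrimSpace(string(out))
    fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
}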
	I1010 18:14:08.868339   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:14:08.873080   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:14:08.873139   99368 kubeadm.go:392] StartCluster: {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:14:08.873227   99368 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:14:08.873270   99368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:14:08.916635   99368 cri.go:89] found id: ""
	I1010 18:14:08.916701   99368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:14:08.927424   99368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:14:08.937639   99368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:14:08.950754   99368 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:14:08.950779   99368 kubeadm.go:157] found existing configuration files:
	
	I1010 18:14:08.950834   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:14:08.962204   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:14:08.962290   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:14:08.975261   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:14:08.986716   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:14:08.986809   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:14:08.998689   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:14:09.010244   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:14:09.010336   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:14:09.022153   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:14:09.033360   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:14:09.033436   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 18:14:09.045356   99368 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 18:14:09.160966   99368 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 18:14:09.161052   99368 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 18:14:09.286355   99368 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 18:14:09.286552   99368 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 18:14:09.286700   99368 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 18:14:09.304139   99368 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 18:14:09.367960   99368 out.go:235]   - Generating certificates and keys ...
	I1010 18:14:09.368080   99368 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 18:14:09.368161   99368 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 18:14:09.384046   99368 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 18:14:09.463103   99368 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1010 18:14:09.567857   99368 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1010 18:14:09.723111   99368 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1010 18:14:09.854233   99368 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1010 18:14:09.854378   99368 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-142481 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	I1010 18:14:09.939722   99368 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1010 18:14:09.939862   99368 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-142481 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	I1010 18:14:10.144343   99368 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 18:14:10.236373   99368 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 18:14:10.313629   99368 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1010 18:14:10.313727   99368 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 18:14:10.420431   99368 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 18:14:10.571019   99368 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 18:14:10.736436   99368 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 18:14:10.835479   99368 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 18:14:10.964962   99368 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 18:14:10.965625   99368 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 18:14:10.970210   99368 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 18:14:10.974272   99368 out.go:235]   - Booting up control plane ...
	I1010 18:14:10.974411   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 18:14:10.974532   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 18:14:10.974647   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 18:14:10.995458   99368 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 18:14:11.002605   99368 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 18:14:11.002687   99368 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 18:14:11.149847   99368 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:14:11.150007   99368 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:14:11.651121   99368 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.084729ms
	I1010 18:14:11.651236   99368 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 18:14:20.808127   99368 kubeadm.go:310] [api-check] The API server is healthy after 9.156536113s
	I1010 18:14:20.824946   99368 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:14:20.839773   99368 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:14:20.870820   99368 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:14:20.871016   99368 kubeadm.go:310] [mark-control-plane] Marking the node ha-142481 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:14:20.887157   99368 kubeadm.go:310] [bootstrap-token] Using token: 644oik.7go4jyqro7if5l4w
	I1010 18:14:20.888737   99368 out.go:235]   - Configuring RBAC rules ...
	I1010 18:14:20.888842   99368 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:14:20.898440   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:14:20.910480   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:14:20.915628   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 18:14:20.920682   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:14:20.931471   99368 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:14:21.219016   99368 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:14:21.647641   99368 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 18:14:22.223206   99368 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 18:14:22.224137   99368 kubeadm.go:310] 
	I1010 18:14:22.224257   99368 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 18:14:22.224281   99368 kubeadm.go:310] 
	I1010 18:14:22.224367   99368 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 18:14:22.224376   99368 kubeadm.go:310] 
	I1010 18:14:22.224411   99368 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 18:14:22.224481   99368 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:14:22.224552   99368 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:14:22.224561   99368 kubeadm.go:310] 
	I1010 18:14:22.224636   99368 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 18:14:22.224649   99368 kubeadm.go:310] 
	I1010 18:14:22.224716   99368 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:14:22.224728   99368 kubeadm.go:310] 
	I1010 18:14:22.224806   99368 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 18:14:22.224925   99368 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:14:22.225015   99368 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:14:22.225025   99368 kubeadm.go:310] 
	I1010 18:14:22.225149   99368 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:14:22.225266   99368 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 18:14:22.225276   99368 kubeadm.go:310] 
	I1010 18:14:22.225390   99368 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 644oik.7go4jyqro7if5l4w \
	I1010 18:14:22.225541   99368 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 18:14:22.225591   99368 kubeadm.go:310] 	--control-plane 
	I1010 18:14:22.225619   99368 kubeadm.go:310] 
	I1010 18:14:22.225743   99368 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:14:22.225753   99368 kubeadm.go:310] 
	I1010 18:14:22.225845   99368 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 644oik.7go4jyqro7if5l4w \
	I1010 18:14:22.225968   99368 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 18:14:22.226430   99368 kubeadm.go:310] W1010 18:14:09.112606     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 18:14:22.226836   99368 kubeadm.go:310] W1010 18:14:09.113373     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 18:14:22.226944   99368 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 18:14:22.226978   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:14:22.226989   99368 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 18:14:22.229089   99368 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1010 18:14:22.230625   99368 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:14:22.236334   99368 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1010 18:14:22.236358   99368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:14:22.263826   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
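Note: minikube detects a single-node multinode profile and applies a kindnet CNI manifest over SSH. A rough manual check of the rollout (a sketch; the daemonset name "kindnet" and the kube-system namespace are assumptions based on minikube's bundled manifest):

  kubectl -n kube-system get daemonset kindnet
  kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s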
	I1010 18:14:22.691291   99368 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:14:22.691383   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:22.691399   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481 minikube.k8s.io/updated_at=2024_10_10T18_14_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=true
	I1010 18:14:22.748532   99368 ops.go:34] apiserver oom_adj: -16
	I1010 18:14:22.970463   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:23.471032   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:23.553414   99368 kubeadm.go:1113] duration metric: took 862.100636ms to wait for elevateKubeSystemPrivileges
	I1010 18:14:23.553464   99368 kubeadm.go:394] duration metric: took 14.680326546s to StartCluster
	I1010 18:14:23.553490   99368 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:23.553611   99368 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:14:23.554487   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:23.554725   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:14:23.554735   99368 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
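Note: only storage-provisioner and default-storageclass are enabled in this run. The same addon state can be inspected or changed from the minikube CLI (a sketch; the profile name is taken from the log):

  minikube -p ha-142481 addons list
  minikube -p ha-142481 addons enable metrics-server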
	I1010 18:14:23.554719   99368 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:14:23.554809   99368 addons.go:69] Setting storage-provisioner=true in profile "ha-142481"
	I1010 18:14:23.554818   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:14:23.554825   99368 addons.go:234] Setting addon storage-provisioner=true in "ha-142481"
	I1010 18:14:23.554829   99368 addons.go:69] Setting default-storageclass=true in profile "ha-142481"
	I1010 18:14:23.554845   99368 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-142481"
	I1010 18:14:23.554853   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:23.554928   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:23.555209   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.555239   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.555300   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.555338   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.570324   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36105
	I1010 18:14:23.570445   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I1010 18:14:23.570857   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.570886   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.571436   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.571459   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.571566   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.571589   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.571790   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.571894   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.571996   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.572434   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.572484   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.574225   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:14:23.574554   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1010 18:14:23.575091   99368 cert_rotation.go:140] Starting client certificate rotation controller
	I1010 18:14:23.575347   99368 addons.go:234] Setting addon default-storageclass=true in "ha-142481"
	I1010 18:14:23.575391   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:23.575743   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.575783   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.587483   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34291
	I1010 18:14:23.587940   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.588477   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.588502   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.588933   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.589102   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.590856   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:23.590904   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I1010 18:14:23.591399   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.591917   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.591946   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.592234   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.592690   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.592731   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.593082   99368 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:14:23.594593   99368 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:14:23.594613   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:14:23.594629   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:23.597561   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.598029   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:23.598057   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.598292   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:23.598455   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:23.598621   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:23.598811   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:23.608949   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I1010 18:14:23.609372   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.609889   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.609916   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.610243   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.610467   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.612216   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:23.612447   99368 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:14:23.612464   99368 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:14:23.612481   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:23.615402   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.615852   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:23.615886   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.616075   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:23.616255   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:23.616404   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:23.616566   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:23.680546   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
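Note: the sed pipeline above injects a host record for host.minikube.internal into the CoreDNS Corefile before replacing the ConfigMap. A minimal check of the result (a sketch, assuming kubectl points at the ha-142481 cluster):

  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
  # the replaced Corefile should now contain a block like:
  #   hosts {
  #      192.168.39.1 host.minikube.internal
  #      fallthrough
  #   }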
	I1010 18:14:23.774021   99368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:14:23.820915   99368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:14:24.197953   99368 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1010 18:14:24.533925   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.533960   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.533990   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534001   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534267   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534297   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534313   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534319   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534320   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534323   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534342   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534328   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534394   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534402   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534551   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534571   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534647   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534673   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534690   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534743   99368 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1010 18:14:24.534893   99368 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1010 18:14:24.535016   99368 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1010 18:14:24.535028   99368 round_trippers.go:469] Request Headers:
	I1010 18:14:24.535038   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:14:24.535046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:14:24.550066   99368 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1010 18:14:24.550802   99368 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1010 18:14:24.550817   99368 round_trippers.go:469] Request Headers:
	I1010 18:14:24.550825   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:14:24.550830   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:14:24.550834   99368 round_trippers.go:473]     Content-Type: application/json
	I1010 18:14:24.554277   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:14:24.554448   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.554465   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.554772   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.554791   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.556620   99368 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1010 18:14:24.558034   99368 addons.go:510] duration metric: took 1.003294102s for enable addons: enabled=[storage-provisioner default-storageclass]
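Note: the GET/PUT round-trip above patches the "standard" StorageClass and closes out the addon enable step. A client-side confirmation (a sketch; the pod name storage-provisioner is minikube's default and is an assumption here):

  kubectl get storageclass standard
  kubectl -n kube-system get pod storage-provisioner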
	I1010 18:14:24.558071   99368 start.go:246] waiting for cluster config update ...
	I1010 18:14:24.558083   99368 start.go:255] writing updated cluster config ...
	I1010 18:14:24.559825   99368 out.go:201] 
	I1010 18:14:24.561439   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:24.561503   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:24.563101   99368 out.go:177] * Starting "ha-142481-m02" control-plane node in "ha-142481" cluster
	I1010 18:14:24.564327   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:14:24.564349   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:14:24.564452   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:14:24.564466   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:14:24.564540   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:24.564701   99368 start.go:360] acquireMachinesLock for ha-142481-m02: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:14:24.564749   99368 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "ha-142481-m02"
	I1010 18:14:24.564772   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
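Note: the second control-plane node (m02) is provisioned from the profile's saved machine config during this `minikube start` run. Outside of a fresh start, a comparable node could be added to an existing HA profile with (a sketch; availability of the --control-plane flag on `node add` is an assumption about recent minikube releases):

  minikube -p ha-142481 node add --control-plane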
	I1010 18:14:24.564841   99368 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1010 18:14:24.566583   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:14:24.566679   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:24.566707   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:24.581685   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I1010 18:14:24.582176   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:24.582682   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:24.582704   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:24.583014   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:24.583206   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:24.583343   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:24.583500   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:14:24.583528   99368 client.go:168] LocalClient.Create starting
	I1010 18:14:24.583563   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:14:24.583608   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:14:24.583628   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:14:24.583689   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:14:24.583714   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:14:24.583730   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:14:24.583754   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:14:24.583765   99368 main.go:141] libmachine: (ha-142481-m02) Calling .PreCreateCheck
	I1010 18:14:24.584021   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:24.584567   99368 main.go:141] libmachine: Creating machine...
	I1010 18:14:24.584588   99368 main.go:141] libmachine: (ha-142481-m02) Calling .Create
	I1010 18:14:24.584740   99368 main.go:141] libmachine: (ha-142481-m02) Creating KVM machine...
	I1010 18:14:24.585948   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found existing default KVM network
	I1010 18:14:24.586049   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found existing private KVM network mk-ha-142481
	I1010 18:14:24.586156   99368 main.go:141] libmachine: (ha-142481-m02) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 ...
	I1010 18:14:24.586179   99368 main.go:141] libmachine: (ha-142481-m02) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:14:24.586274   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:24.586151   99736 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:14:24.586354   99368 main.go:141] libmachine: (ha-142481-m02) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:14:24.870233   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:24.870047   99736 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa...
	I1010 18:14:25.124750   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:25.124608   99736 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/ha-142481-m02.rawdisk...
	I1010 18:14:25.124783   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Writing magic tar header
	I1010 18:14:25.124795   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Writing SSH key tar header
	I1010 18:14:25.124806   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:25.124735   99736 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 ...
	I1010 18:14:25.124821   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02
	I1010 18:14:25.124919   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:14:25.124946   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 (perms=drwx------)
	I1010 18:14:25.124954   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:14:25.124968   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:14:25.124973   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:14:25.124980   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:14:25.124988   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:14:25.124994   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:14:25.124999   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:14:25.125037   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:14:25.125058   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:14:25.125067   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home
	I1010 18:14:25.125079   99368 main.go:141] libmachine: (ha-142481-m02) Creating domain...
	I1010 18:14:25.125091   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Skipping /home - not owner
	I1010 18:14:25.126075   99368 main.go:141] libmachine: (ha-142481-m02) define libvirt domain using xml: 
	I1010 18:14:25.126098   99368 main.go:141] libmachine: (ha-142481-m02) <domain type='kvm'>
	I1010 18:14:25.126107   99368 main.go:141] libmachine: (ha-142481-m02)   <name>ha-142481-m02</name>
	I1010 18:14:25.126114   99368 main.go:141] libmachine: (ha-142481-m02)   <memory unit='MiB'>2200</memory>
	I1010 18:14:25.126125   99368 main.go:141] libmachine: (ha-142481-m02)   <vcpu>2</vcpu>
	I1010 18:14:25.126132   99368 main.go:141] libmachine: (ha-142481-m02)   <features>
	I1010 18:14:25.126140   99368 main.go:141] libmachine: (ha-142481-m02)     <acpi/>
	I1010 18:14:25.126150   99368 main.go:141] libmachine: (ha-142481-m02)     <apic/>
	I1010 18:14:25.126164   99368 main.go:141] libmachine: (ha-142481-m02)     <pae/>
	I1010 18:14:25.126176   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126185   99368 main.go:141] libmachine: (ha-142481-m02)   </features>
	I1010 18:14:25.126193   99368 main.go:141] libmachine: (ha-142481-m02)   <cpu mode='host-passthrough'>
	I1010 18:14:25.126201   99368 main.go:141] libmachine: (ha-142481-m02)   
	I1010 18:14:25.126208   99368 main.go:141] libmachine: (ha-142481-m02)   </cpu>
	I1010 18:14:25.126215   99368 main.go:141] libmachine: (ha-142481-m02)   <os>
	I1010 18:14:25.126225   99368 main.go:141] libmachine: (ha-142481-m02)     <type>hvm</type>
	I1010 18:14:25.126232   99368 main.go:141] libmachine: (ha-142481-m02)     <boot dev='cdrom'/>
	I1010 18:14:25.126241   99368 main.go:141] libmachine: (ha-142481-m02)     <boot dev='hd'/>
	I1010 18:14:25.126251   99368 main.go:141] libmachine: (ha-142481-m02)     <bootmenu enable='no'/>
	I1010 18:14:25.126273   99368 main.go:141] libmachine: (ha-142481-m02)   </os>
	I1010 18:14:25.126284   99368 main.go:141] libmachine: (ha-142481-m02)   <devices>
	I1010 18:14:25.126294   99368 main.go:141] libmachine: (ha-142481-m02)     <disk type='file' device='cdrom'>
	I1010 18:14:25.126307   99368 main.go:141] libmachine: (ha-142481-m02)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/boot2docker.iso'/>
	I1010 18:14:25.126318   99368 main.go:141] libmachine: (ha-142481-m02)       <target dev='hdc' bus='scsi'/>
	I1010 18:14:25.126329   99368 main.go:141] libmachine: (ha-142481-m02)       <readonly/>
	I1010 18:14:25.126342   99368 main.go:141] libmachine: (ha-142481-m02)     </disk>
	I1010 18:14:25.126353   99368 main.go:141] libmachine: (ha-142481-m02)     <disk type='file' device='disk'>
	I1010 18:14:25.126365   99368 main.go:141] libmachine: (ha-142481-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:14:25.126380   99368 main.go:141] libmachine: (ha-142481-m02)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/ha-142481-m02.rawdisk'/>
	I1010 18:14:25.126391   99368 main.go:141] libmachine: (ha-142481-m02)       <target dev='hda' bus='virtio'/>
	I1010 18:14:25.126401   99368 main.go:141] libmachine: (ha-142481-m02)     </disk>
	I1010 18:14:25.126413   99368 main.go:141] libmachine: (ha-142481-m02)     <interface type='network'>
	I1010 18:14:25.126425   99368 main.go:141] libmachine: (ha-142481-m02)       <source network='mk-ha-142481'/>
	I1010 18:14:25.126434   99368 main.go:141] libmachine: (ha-142481-m02)       <model type='virtio'/>
	I1010 18:14:25.126443   99368 main.go:141] libmachine: (ha-142481-m02)     </interface>
	I1010 18:14:25.126454   99368 main.go:141] libmachine: (ha-142481-m02)     <interface type='network'>
	I1010 18:14:25.126463   99368 main.go:141] libmachine: (ha-142481-m02)       <source network='default'/>
	I1010 18:14:25.126473   99368 main.go:141] libmachine: (ha-142481-m02)       <model type='virtio'/>
	I1010 18:14:25.126494   99368 main.go:141] libmachine: (ha-142481-m02)     </interface>
	I1010 18:14:25.126518   99368 main.go:141] libmachine: (ha-142481-m02)     <serial type='pty'>
	I1010 18:14:25.126526   99368 main.go:141] libmachine: (ha-142481-m02)       <target port='0'/>
	I1010 18:14:25.126530   99368 main.go:141] libmachine: (ha-142481-m02)     </serial>
	I1010 18:14:25.126535   99368 main.go:141] libmachine: (ha-142481-m02)     <console type='pty'>
	I1010 18:14:25.126545   99368 main.go:141] libmachine: (ha-142481-m02)       <target type='serial' port='0'/>
	I1010 18:14:25.126550   99368 main.go:141] libmachine: (ha-142481-m02)     </console>
	I1010 18:14:25.126556   99368 main.go:141] libmachine: (ha-142481-m02)     <rng model='virtio'>
	I1010 18:14:25.126562   99368 main.go:141] libmachine: (ha-142481-m02)       <backend model='random'>/dev/random</backend>
	I1010 18:14:25.126569   99368 main.go:141] libmachine: (ha-142481-m02)     </rng>
	I1010 18:14:25.126574   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126579   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126610   99368 main.go:141] libmachine: (ha-142481-m02)   </devices>
	I1010 18:14:25.126633   99368 main.go:141] libmachine: (ha-142481-m02) </domain>
	I1010 18:14:25.126647   99368 main.go:141] libmachine: (ha-142481-m02) 
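Note: the kvm2 driver defines and boots this domain through the libvirt API. Roughly the same steps with the virsh CLI would be (a sketch; assumes the XML above were saved to ha-142481-m02.xml):

  virsh define ha-142481-m02.xml
  virsh start ha-142481-m02
  virsh domifaddr ha-142481-m02   # once DHCP has handed out an address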
	I1010 18:14:25.133808   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:63:37:66 in network default
	I1010 18:14:25.134525   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:25.134551   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring networks are active...
	I1010 18:14:25.135477   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring network default is active
	I1010 18:14:25.135837   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring network mk-ha-142481 is active
	I1010 18:14:25.136343   99368 main.go:141] libmachine: (ha-142481-m02) Getting domain xml...
	I1010 18:14:25.137263   99368 main.go:141] libmachine: (ha-142481-m02) Creating domain...
	I1010 18:14:26.362672   99368 main.go:141] libmachine: (ha-142481-m02) Waiting to get IP...
	I1010 18:14:26.363443   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.363821   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.363878   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.363829   99736 retry.go:31] will retry after 237.123337ms: waiting for machine to come up
	I1010 18:14:26.602398   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.602883   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.602910   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.602829   99736 retry.go:31] will retry after 255.919096ms: waiting for machine to come up
	I1010 18:14:26.860273   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.860891   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.860917   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.860860   99736 retry.go:31] will retry after 363.867823ms: waiting for machine to come up
	I1010 18:14:27.226493   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:27.226955   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:27.226984   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:27.226896   99736 retry.go:31] will retry after 430.931001ms: waiting for machine to come up
	I1010 18:14:27.659820   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:27.660273   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:27.660299   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:27.660222   99736 retry.go:31] will retry after 681.867141ms: waiting for machine to come up
	I1010 18:14:28.344366   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:28.344931   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:28.344989   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:28.344843   99736 retry.go:31] will retry after 753.410001ms: waiting for machine to come up
	I1010 18:14:29.099845   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:29.100316   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:29.100345   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:29.100254   99736 retry.go:31] will retry after 1.081998824s: waiting for machine to come up
	I1010 18:14:30.183319   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:30.183733   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:30.183762   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:30.183699   99736 retry.go:31] will retry after 1.2621544s: waiting for machine to come up
	I1010 18:14:31.448194   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:31.448615   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:31.448639   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:31.448571   99736 retry.go:31] will retry after 1.545841483s: waiting for machine to come up
	I1010 18:14:32.996370   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:32.996940   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:32.996970   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:32.996877   99736 retry.go:31] will retry after 1.954916368s: waiting for machine to come up
	I1010 18:14:34.953362   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:34.953810   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:34.953834   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:34.953765   99736 retry.go:31] will retry after 2.832021438s: waiting for machine to come up
	I1010 18:14:37.787030   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:37.787437   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:37.787462   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:37.787399   99736 retry.go:31] will retry after 3.372903659s: waiting for machine to come up
	I1010 18:14:41.162229   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:41.162830   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:41.162860   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:41.162748   99736 retry.go:31] will retry after 3.532610017s: waiting for machine to come up
	I1010 18:14:44.697346   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:44.697811   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:44.697838   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:44.697765   99736 retry.go:31] will retry after 4.121205885s: waiting for machine to come up
	I1010 18:14:48.820235   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.820691   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has current primary IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.820707   99368 main.go:141] libmachine: (ha-142481-m02) Found IP for machine: 192.168.39.186
	I1010 18:14:48.820716   99368 main.go:141] libmachine: (ha-142481-m02) Reserving static IP address...
	I1010 18:14:48.821115   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find host DHCP lease matching {name: "ha-142481-m02", mac: "52:54:00:70:30:26", ip: "192.168.39.186"} in network mk-ha-142481
	I1010 18:14:48.903340   99368 main.go:141] libmachine: (ha-142481-m02) Reserved static IP address: 192.168.39.186
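Note: while the driver polls for the lease (the retry/backoff lines above) and then reserves the address, the same information is visible directly from libvirt (a sketch; the network name mk-ha-142481 comes from the log):

  virsh net-dhcp-leases mk-ha-142481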
	I1010 18:14:48.903376   99368 main.go:141] libmachine: (ha-142481-m02) Waiting for SSH to be available...
	I1010 18:14:48.903387   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Getting to WaitForSSH function...
	I1010 18:14:48.906232   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.906828   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:30:26}
	I1010 18:14:48.906862   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.907057   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using SSH client type: external
	I1010 18:14:48.907087   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa (-rw-------)
	I1010 18:14:48.907120   99368 main.go:141] libmachine: (ha-142481-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:14:48.907134   99368 main.go:141] libmachine: (ha-142481-m02) DBG | About to run SSH command:
	I1010 18:14:48.907147   99368 main.go:141] libmachine: (ha-142481-m02) DBG | exit 0
	I1010 18:14:49.037555   99368 main.go:141] libmachine: (ha-142481-m02) DBG | SSH cmd err, output: <nil>: 
	I1010 18:14:49.037876   99368 main.go:141] libmachine: (ha-142481-m02) KVM machine creation complete!
	I1010 18:14:49.038189   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:49.038756   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:49.038950   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:49.039103   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:14:49.039117   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetState
	I1010 18:14:49.040560   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:14:49.040573   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:14:49.040578   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:14:49.040584   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.042911   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.043240   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.043266   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.043533   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.043730   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.043927   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.044092   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.044245   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.044498   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.044515   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:14:49.156568   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:49.156599   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:14:49.156607   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.159819   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.160299   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.160329   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.160572   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.160782   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.160954   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.161115   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.161282   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.161504   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.161519   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:14:49.274150   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:14:49.274238   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:14:49.274249   99368 main.go:141] libmachine: Provisioning with buildroot...
	I1010 18:14:49.274261   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.274541   99368 buildroot.go:166] provisioning hostname "ha-142481-m02"
	I1010 18:14:49.274574   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.274809   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.277484   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.277861   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.277893   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.278037   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.278241   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.278416   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.278595   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.278858   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.279047   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.279061   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481-m02 && echo "ha-142481-m02" | sudo tee /etc/hostname
	I1010 18:14:49.409335   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481-m02
	
	I1010 18:14:49.409369   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.412112   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.412427   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.412458   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.412712   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.412921   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.413069   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.413182   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.413398   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.413565   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.413581   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:14:49.542003   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:49.542039   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:14:49.542058   99368 buildroot.go:174] setting up certificates
	I1010 18:14:49.542069   99368 provision.go:84] configureAuth start
	I1010 18:14:49.542080   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.542340   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:49.545159   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.545524   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.545554   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.545698   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.547804   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.548115   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.548135   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.548323   99368 provision.go:143] copyHostCerts
	I1010 18:14:49.548352   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:49.548392   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:14:49.548403   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:49.548486   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:14:49.548582   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:49.548609   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:14:49.548619   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:49.548657   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:14:49.548719   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:49.548743   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:14:49.548752   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:49.548788   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:14:49.548865   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481-m02 san=[127.0.0.1 192.168.39.186 ha-142481-m02 localhost minikube]
	I1010 18:14:49.606708   99368 provision.go:177] copyRemoteCerts
	I1010 18:14:49.606781   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:14:49.606811   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.609620   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.609921   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.609952   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.610121   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.610322   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.610506   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.610631   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:49.695655   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:14:49.695736   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 18:14:49.723445   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:14:49.723520   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:14:49.748318   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:14:49.748402   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:14:49.773423   99368 provision.go:87] duration metric: took 231.339814ms to configureAuth
	I1010 18:14:49.773451   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:14:49.773626   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:49.773705   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.776350   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.776701   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.776726   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.776913   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.777128   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.777292   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.777435   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.777590   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.777795   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.777817   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:14:50.018484   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:14:50.018513   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:14:50.018525   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetURL
	I1010 18:14:50.019796   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using libvirt version 6000000
	I1010 18:14:50.022107   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.022432   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.022476   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.022628   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:14:50.022646   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:14:50.022657   99368 client.go:171] duration metric: took 25.439118717s to LocalClient.Create
	I1010 18:14:50.022695   99368 start.go:167] duration metric: took 25.439191435s to libmachine.API.Create "ha-142481"
	I1010 18:14:50.022708   99368 start.go:293] postStartSetup for "ha-142481-m02" (driver="kvm2")
	I1010 18:14:50.022725   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:14:50.022763   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.023030   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:14:50.023055   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.025463   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.025834   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.025869   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.026093   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.026322   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.026520   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.026673   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.115488   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:14:50.120106   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:14:50.120146   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:14:50.120259   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:14:50.120347   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:14:50.120360   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:14:50.120462   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:14:50.130011   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:50.156296   99368 start.go:296] duration metric: took 133.570332ms for postStartSetup
	I1010 18:14:50.156350   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:50.156937   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:50.159597   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.160043   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.160071   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.160321   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:50.160495   99368 start.go:128] duration metric: took 25.595643097s to createHost
	I1010 18:14:50.160517   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.162762   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.163085   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.163110   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.163276   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.163459   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.163603   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.163760   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.163931   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:50.164125   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:50.164139   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:14:50.277898   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584090.237251579
	
	I1010 18:14:50.277925   99368 fix.go:216] guest clock: 1728584090.237251579
	I1010 18:14:50.277933   99368 fix.go:229] Guest: 2024-10-10 18:14:50.237251579 +0000 UTC Remote: 2024-10-10 18:14:50.160506288 +0000 UTC m=+72.091094363 (delta=76.745291ms)
	I1010 18:14:50.277949   99368 fix.go:200] guest clock delta is within tolerance: 76.745291ms
	I1010 18:14:50.277955   99368 start.go:83] releasing machines lock for "ha-142481-m02", held for 25.713195595s
	I1010 18:14:50.277975   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.278294   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:50.280842   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.281256   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.281283   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.283734   99368 out.go:177] * Found network options:
	I1010 18:14:50.285300   99368 out.go:177]   - NO_PROXY=192.168.39.104
	W1010 18:14:50.286708   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:14:50.286748   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287340   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287549   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287642   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:14:50.287694   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	W1010 18:14:50.287740   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:14:50.287827   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:14:50.287852   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.290823   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.290971   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291276   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.291307   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291499   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.291594   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.291635   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291693   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.291858   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.291862   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.292017   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.292017   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.292146   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.292458   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.532570   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:14:50.540169   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:14:50.540248   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:14:50.557472   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:14:50.557500   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:14:50.557574   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:14:50.574787   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:14:50.590774   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:14:50.590848   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:14:50.605941   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:14:50.620901   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:14:50.753387   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:14:50.919446   99368 docker.go:233] disabling docker service ...
	I1010 18:14:50.919535   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:14:50.934691   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:14:50.948383   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:14:51.098212   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:14:51.222205   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:14:51.236395   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:14:51.255620   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:14:51.255682   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.265706   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:14:51.265766   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.276288   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.287384   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.298290   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:14:51.309391   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.322059   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.341165   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.352334   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:14:51.361995   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:14:51.362055   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:14:51.376647   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:14:51.387344   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:51.501276   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:14:51.591570   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:14:51.591667   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:14:51.596519   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:14:51.596593   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:14:51.600964   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:14:51.642625   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:14:51.642709   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:51.670857   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:51.701992   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:14:51.703402   99368 out.go:177]   - env NO_PROXY=192.168.39.104
	I1010 18:14:51.704577   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:51.707504   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:51.707889   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:51.707921   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:51.708187   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:14:51.712581   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:51.728042   99368 mustload.go:65] Loading cluster: ha-142481
	I1010 18:14:51.728254   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:51.728534   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:51.728571   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:51.744127   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I1010 18:14:51.744674   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:51.745223   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:51.745247   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:51.745620   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:51.745831   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:51.747403   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:51.747706   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:51.747737   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:51.763030   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41747
	I1010 18:14:51.763446   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:51.763925   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:51.763949   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:51.764295   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:51.764486   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:51.764627   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.186
	I1010 18:14:51.764637   99368 certs.go:194] generating shared ca certs ...
	I1010 18:14:51.764650   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.764765   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:14:51.764803   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:14:51.764812   99368 certs.go:256] generating profile certs ...
	I1010 18:14:51.764912   99368 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:14:51.764937   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992
	I1010 18:14:51.764951   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.186 192.168.39.254]
	I1010 18:14:51.993768   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 ...
	I1010 18:14:51.993803   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992: {Name:mk9eca5b6bcf4de2bd1cb4984282b7c5168c504a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.993982   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992 ...
	I1010 18:14:51.993996   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992: {Name:mk53f522d230afb3a7d1b4f761a379d6be7ff843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.994077   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:14:51.994210   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
	I1010 18:14:51.994347   99368 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:14:51.994363   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:14:51.994376   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:14:51.994389   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:14:51.994407   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:14:51.994420   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:14:51.994432   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:14:51.994443   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:14:51.994454   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:14:51.994507   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:14:51.994535   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:14:51.994545   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:14:51.994565   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:14:51.994589   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:14:51.994613   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:14:51.994650   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:51.994681   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:51.994695   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:14:51.994706   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:14:51.994740   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:51.997958   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:51.998443   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:51.998473   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:51.998636   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:51.998839   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:51.999035   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:51.999239   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:52.077280   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1010 18:14:52.082655   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1010 18:14:52.094293   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1010 18:14:52.102951   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1010 18:14:52.115800   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1010 18:14:52.120082   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1010 18:14:52.130693   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1010 18:14:52.135696   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1010 18:14:52.148816   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1010 18:14:52.158283   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1010 18:14:52.169959   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1010 18:14:52.174352   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1010 18:14:52.185494   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:14:52.211191   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:14:52.237842   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:14:52.263110   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:14:52.287843   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1010 18:14:52.313473   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:14:52.338065   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:14:52.363071   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:14:52.387579   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:14:52.412888   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:14:52.437781   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:14:52.464757   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1010 18:14:52.481913   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1010 18:14:52.499025   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1010 18:14:52.515900   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1010 18:14:52.533545   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1010 18:14:52.550809   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1010 18:14:52.567422   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1010 18:14:52.584795   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:14:52.590891   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:14:52.602879   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.607603   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.607658   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.613708   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:14:52.631468   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:14:52.643064   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.647811   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.647874   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.653881   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:14:52.665152   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:14:52.676562   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.681256   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.681313   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.687223   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 18:14:52.699194   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:14:52.703641   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:14:52.703707   99368 kubeadm.go:934] updating node {m02 192.168.39.186 8443 v1.31.1 crio true true} ...
	I1010 18:14:52.703805   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:14:52.703835   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:14:52.703878   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:14:52.723026   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:14:52.723119   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1010 18:14:52.723189   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:52.734671   99368 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1010 18:14:52.734752   99368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:52.745741   99368 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1010 18:14:52.745751   99368 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1010 18:14:52.745751   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1010 18:14:52.745871   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:14:52.745940   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:14:52.751099   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1010 18:14:52.751132   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1010 18:14:53.544046   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:14:53.544130   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:14:53.549472   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1010 18:14:53.549517   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1010 18:14:53.647955   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:14:53.681722   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:14:53.681823   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:14:53.695932   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1010 18:14:53.695987   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1010 18:14:54.175941   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1010 18:14:54.187282   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1010 18:14:54.205511   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:14:54.223508   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1010 18:14:54.241125   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:14:54.245490   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:54.259173   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:54.401351   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:14:54.419984   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:54.420484   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:54.420546   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:54.436033   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I1010 18:14:54.436556   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:54.437251   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:54.437281   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:54.437607   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:54.437831   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:54.438020   99368 start.go:317] joinCluster: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:14:54.438157   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1010 18:14:54.438180   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:54.441157   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:54.441581   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:54.441609   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:54.441854   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:54.442034   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:54.442149   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:54.442289   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:54.604951   99368 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:14:54.605013   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wt3o3w.k6pkjtb13sd57t6w --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m02 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443"
	I1010 18:15:14.578208   99368 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wt3o3w.k6pkjtb13sd57t6w --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m02 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443": (19.973131424s)
	I1010 18:15:14.578257   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1010 18:15:15.095544   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481-m02 minikube.k8s.io/updated_at=2024_10_10T18_15_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=false
	I1010 18:15:15.208568   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-142481-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1010 18:15:15.337167   99368 start.go:319] duration metric: took 20.899144024s to joinCluster
	I1010 18:15:15.337270   99368 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:15:15.337601   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:15:15.339949   99368 out.go:177] * Verifying Kubernetes components...
	I1010 18:15:15.341260   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:15:15.615485   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:15:15.642973   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:15:15.643325   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1010 18:15:15.643422   99368 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.104:8443
	I1010 18:15:15.643731   99368 node_ready.go:35] waiting up to 6m0s for node "ha-142481-m02" to be "Ready" ...
	I1010 18:15:15.643859   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:15.643869   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:15.643880   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:15.643892   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:15.665402   99368 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1010 18:15:16.144314   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:16.144340   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:16.144351   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:16.144357   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:16.150219   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:16.644045   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:16.644074   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:16.644086   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:16.644093   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:16.654043   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:17.144554   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:17.144581   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:17.144590   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:17.144595   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:17.148858   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:17.643970   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:17.644078   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:17.644104   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:17.644122   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:17.653880   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:17.654572   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:18.144266   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:18.144294   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:18.144302   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:18.144308   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:18.147936   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:18.644346   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:18.644369   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:18.644378   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:18.644382   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:18.648587   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:19.144413   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:19.144443   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:19.144454   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:19.144460   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:19.147695   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:19.644688   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:19.644715   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:19.644726   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:19.644730   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:19.648487   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:20.144679   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:20.144700   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:20.144708   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:20.144712   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:20.148475   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:20.149193   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:20.644644   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:20.644675   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:20.644687   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:20.644694   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:20.648513   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:21.144341   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:21.144366   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:21.144377   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:21.144384   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:21.147839   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:21.644909   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:21.644934   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:21.644942   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:21.644946   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:21.648387   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:22.144173   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:22.144196   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:22.144205   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:22.144209   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:22.147385   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:22.644414   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:22.644444   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:22.644456   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:22.644462   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:22.713904   99368 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I1010 18:15:22.714410   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:23.144902   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:23.144934   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:23.144947   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:23.144954   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:23.147993   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:23.644885   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:23.644971   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:23.644995   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:23.645002   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:23.648711   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:24.144645   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:24.144673   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:24.144685   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:24.144690   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:24.148415   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:24.644379   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:24.644413   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:24.644424   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:24.644429   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:24.648175   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:25.144097   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:25.144120   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:25.144128   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:25.144133   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:25.147203   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:25.147854   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:25.644276   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:25.644303   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:25.644311   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:25.644316   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:25.647929   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:26.143986   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:26.144010   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:26.144018   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:26.144023   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:26.147277   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:26.644893   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:26.644924   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:26.644934   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:26.644939   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:26.648455   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:27.144020   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:27.144042   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:27.144050   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:27.144053   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:27.150719   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:15:27.151307   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:27.644596   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:27.644620   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:27.644628   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:27.644632   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:27.648391   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:28.144777   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:28.144801   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:28.144809   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:28.144813   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:28.148258   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:28.644636   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:28.644665   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:28.644673   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:28.644676   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:28.648181   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.144094   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:29.144120   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:29.144128   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:29.144133   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:29.147945   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.644955   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:29.644977   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:29.644986   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:29.644990   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:29.648391   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.649199   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:30.144628   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:30.144653   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:30.144661   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:30.144665   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:30.148286   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:30.644255   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:30.644288   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:30.644299   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:30.644304   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:30.648062   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:31.144076   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:31.144101   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:31.144109   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:31.144112   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:31.148081   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:31.644011   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:31.644037   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:31.644049   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:31.644055   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:31.653327   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:31.653921   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:32.144247   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:32.144273   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:32.144282   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:32.144286   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:32.147700   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:32.644836   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:32.644894   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:32.644908   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:32.644913   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:32.648022   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:33.144204   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:33.144231   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:33.144240   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:33.144242   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:33.148094   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:33.644909   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:33.644932   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:33.644940   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:33.644943   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:33.648586   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.144644   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.144672   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.144680   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.144685   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.148129   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.148805   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:34.644279   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.644310   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.644321   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.644329   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.648073   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.648695   99368 node_ready.go:49] node "ha-142481-m02" has status "Ready":"True"
	I1010 18:15:34.648716   99368 node_ready.go:38] duration metric: took 19.004960132s for node "ha-142481-m02" to be "Ready" ...
	I1010 18:15:34.648732   99368 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:15:34.648874   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:34.648887   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.648899   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.648905   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.653067   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:34.660867   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.660985   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-28dll
	I1010 18:15:34.660996   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.661004   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.661008   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.673094   99368 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1010 18:15:34.673807   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.673825   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.673833   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.673838   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.679300   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:34.679893   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.679919   99368 pod_ready.go:82] duration metric: took 19.021803ms for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.679934   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.680016   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xfhq8
	I1010 18:15:34.680028   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.680039   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.680046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.687874   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:15:34.688550   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.688567   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.688575   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.688578   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.693607   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:34.694298   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.694318   99368 pod_ready.go:82] duration metric: took 14.376081ms for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.694329   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.694401   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481
	I1010 18:15:34.694412   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.694422   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.694427   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.705466   99368 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1010 18:15:34.706122   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.706142   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.706152   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.706157   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.713862   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:15:34.714292   99368 pod_ready.go:93] pod "etcd-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.714313   99368 pod_ready.go:82] duration metric: took 19.977824ms for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.714324   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.714393   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m02
	I1010 18:15:34.714397   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.714407   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.714411   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.724173   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:34.725474   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.725492   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.725502   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.725507   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.728517   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:15:34.729350   99368 pod_ready.go:93] pod "etcd-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.729374   99368 pod_ready.go:82] duration metric: took 15.044498ms for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.729392   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.844828   99368 request.go:632] Waited for 115.352966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:15:34.844940   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:15:34.844954   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.844965   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.844980   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.849582   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.044720   99368 request.go:632] Waited for 194.440409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.044815   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.044823   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.044922   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.044934   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.049101   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.049648   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.049671   99368 pod_ready.go:82] duration metric: took 320.272231ms for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.049694   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.244714   99368 request.go:632] Waited for 194.93387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:15:35.244774   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:15:35.244780   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.244788   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.244791   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.248696   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:35.444831   99368 request.go:632] Waited for 195.412897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:35.444927   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:35.444933   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.444942   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.444946   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.448991   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.450079   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.450103   99368 pod_ready.go:82] duration metric: took 400.401007ms for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.450118   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.645157   99368 request.go:632] Waited for 194.960575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:15:35.645249   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:15:35.645257   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.645268   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.645274   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.648746   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:35.844906   99368 request.go:632] Waited for 195.418533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.844969   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.844974   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.844982   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.844985   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.849036   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.849631   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.849652   99368 pod_ready.go:82] duration metric: took 399.526564ms for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.849663   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.044750   99368 request.go:632] Waited for 194.993362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:15:36.044821   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:15:36.044829   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.044841   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.044860   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.048403   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.244872   99368 request.go:632] Waited for 195.41194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:36.244966   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:36.244978   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.244991   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.245003   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.248422   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.249090   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:36.249112   99368 pod_ready.go:82] duration metric: took 399.440459ms for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.249127   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.445275   99368 request.go:632] Waited for 196.04196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:15:36.445337   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:15:36.445343   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.445350   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.445354   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.449425   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:36.644689   99368 request.go:632] Waited for 194.411636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:36.644795   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:36.644806   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.644817   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.644825   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.648756   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.649220   99368 pod_ready.go:93] pod "kube-proxy-gwvrh" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:36.649241   99368 pod_ready.go:82] duration metric: took 400.105171ms for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.649254   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.844338   99368 request.go:632] Waited for 194.987151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:15:36.844405   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:15:36.844411   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.844420   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.844434   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.848477   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:37.044640   99368 request.go:632] Waited for 195.367234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.044708   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.044715   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.044726   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.044731   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.048116   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.048721   99368 pod_ready.go:93] pod "kube-proxy-srfng" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.048745   99368 pod_ready.go:82] duration metric: took 399.483125ms for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.048759   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.244914   99368 request.go:632] Waited for 196.022775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:15:37.244993   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:15:37.245004   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.245029   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.245036   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.248801   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.444916   99368 request.go:632] Waited for 195.401869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:37.444984   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:37.444991   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.445002   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.445008   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.448457   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.449008   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.449028   99368 pod_ready.go:82] duration metric: took 400.260773ms for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.449039   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.645172   99368 request.go:632] Waited for 196.046461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:15:37.645249   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:15:37.645256   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.645265   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.645271   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.648894   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.844799   99368 request.go:632] Waited for 195.42858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.844915   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.844926   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.844937   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.844945   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.848459   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.849058   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.849077   99368 pod_ready.go:82] duration metric: took 400.031968ms for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.849089   99368 pod_ready.go:39] duration metric: took 3.200308757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:15:37.849113   99368 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:15:37.849168   99368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:15:37.867701   99368 api_server.go:72] duration metric: took 22.53038697s to wait for apiserver process to appear ...
	I1010 18:15:37.867737   99368 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:15:37.867762   99368 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I1010 18:15:37.874449   99368 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I1010 18:15:37.874534   99368 round_trippers.go:463] GET https://192.168.39.104:8443/version
	I1010 18:15:37.874545   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.874561   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.874568   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.875635   99368 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1010 18:15:37.875761   99368 api_server.go:141] control plane version: v1.31.1
	I1010 18:15:37.875781   99368 api_server.go:131] duration metric: took 8.036588ms to wait for apiserver health ...
	I1010 18:15:37.875792   99368 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:15:38.045248   99368 request.go:632] Waited for 169.346857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.045336   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.045344   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.045356   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.045367   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.051387   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:38.056244   99368 system_pods.go:59] 17 kube-system pods found
	I1010 18:15:38.056282   99368 system_pods.go:61] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:15:38.056289   99368 system_pods.go:61] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:15:38.056293   99368 system_pods.go:61] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:15:38.056297   99368 system_pods.go:61] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:15:38.056300   99368 system_pods.go:61] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:15:38.056308   99368 system_pods.go:61] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:15:38.056311   99368 system_pods.go:61] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:15:38.056315   99368 system_pods.go:61] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:15:38.056318   99368 system_pods.go:61] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:15:38.056323   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:15:38.056327   99368 system_pods.go:61] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:15:38.056331   99368 system_pods.go:61] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:15:38.056334   99368 system_pods.go:61] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:15:38.056337   99368 system_pods.go:61] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:15:38.056340   99368 system_pods.go:61] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:15:38.056343   99368 system_pods.go:61] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:15:38.056345   99368 system_pods.go:61] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:15:38.056352   99368 system_pods.go:74] duration metric: took 180.553557ms to wait for pod list to return data ...
	I1010 18:15:38.056362   99368 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:15:38.244537   99368 request.go:632] Waited for 188.093724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:15:38.244618   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:15:38.244624   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.244633   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.244641   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.248165   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:38.248399   99368 default_sa.go:45] found service account: "default"
	I1010 18:15:38.248416   99368 default_sa.go:55] duration metric: took 192.046524ms for default service account to be created ...
	I1010 18:15:38.248427   99368 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:15:38.444704   99368 request.go:632] Waited for 196.206785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.444765   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.444770   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.444778   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.444783   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.479585   99368 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I1010 18:15:38.484055   99368 system_pods.go:86] 17 kube-system pods found
	I1010 18:15:38.484088   99368 system_pods.go:89] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:15:38.484094   99368 system_pods.go:89] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:15:38.484098   99368 system_pods.go:89] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:15:38.484102   99368 system_pods.go:89] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:15:38.484106   99368 system_pods.go:89] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:15:38.484109   99368 system_pods.go:89] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:15:38.484113   99368 system_pods.go:89] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:15:38.484116   99368 system_pods.go:89] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:15:38.484119   99368 system_pods.go:89] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:15:38.484122   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:15:38.484125   99368 system_pods.go:89] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:15:38.484128   99368 system_pods.go:89] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:15:38.484132   99368 system_pods.go:89] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:15:38.484135   99368 system_pods.go:89] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:15:38.484139   99368 system_pods.go:89] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:15:38.484141   99368 system_pods.go:89] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:15:38.484144   99368 system_pods.go:89] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:15:38.484152   99368 system_pods.go:126] duration metric: took 235.71716ms to wait for k8s-apps to be running ...
	I1010 18:15:38.484162   99368 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:15:38.484219   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:15:38.499587   99368 system_svc.go:56] duration metric: took 15.413149ms WaitForService to wait for kubelet
	I1010 18:15:38.499630   99368 kubeadm.go:582] duration metric: took 23.162321939s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:15:38.499655   99368 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:15:38.645127   99368 request.go:632] Waited for 145.342386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I1010 18:15:38.645247   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I1010 18:15:38.645259   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.645267   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.645272   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.649291   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:38.650032   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:15:38.650065   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:15:38.650077   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:15:38.650081   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:15:38.650086   99368 node_conditions.go:105] duration metric: took 150.425543ms to run NodePressure ...
	I1010 18:15:38.650104   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:15:38.650137   99368 start.go:255] writing updated cluster config ...
	I1010 18:15:38.652551   99368 out.go:201] 
	I1010 18:15:38.654476   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:15:38.654593   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:15:38.656332   99368 out.go:177] * Starting "ha-142481-m03" control-plane node in "ha-142481" cluster
	I1010 18:15:38.657633   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:15:38.657659   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:15:38.657790   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:15:38.657806   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:15:38.657908   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:15:38.658076   99368 start.go:360] acquireMachinesLock for ha-142481-m03: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:15:38.658122   99368 start.go:364] duration metric: took 26.16µs to acquireMachinesLock for "ha-142481-m03"
	I1010 18:15:38.658147   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:15:38.658249   99368 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1010 18:15:38.660071   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:15:38.660197   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:15:38.660258   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:15:38.676361   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I1010 18:15:38.676935   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:15:38.677467   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:15:38.677506   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:15:38.677892   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:15:38.678105   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:15:38.678326   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:15:38.678504   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:15:38.678538   99368 client.go:168] LocalClient.Create starting
	I1010 18:15:38.678568   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:15:38.678601   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:15:38.678614   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:15:38.678663   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:15:38.678681   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:15:38.678691   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:15:38.678707   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:15:38.678715   99368 main.go:141] libmachine: (ha-142481-m03) Calling .PreCreateCheck
	I1010 18:15:38.678898   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:15:38.679630   99368 main.go:141] libmachine: Creating machine...
	I1010 18:15:38.679653   99368 main.go:141] libmachine: (ha-142481-m03) Calling .Create
	I1010 18:15:38.680877   99368 main.go:141] libmachine: (ha-142481-m03) Creating KVM machine...
	I1010 18:15:38.681726   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found existing default KVM network
	I1010 18:15:38.681754   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found existing private KVM network mk-ha-142481
	I1010 18:15:38.681811   99368 main.go:141] libmachine: (ha-142481-m03) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 ...
	I1010 18:15:38.681845   99368 main.go:141] libmachine: (ha-142481-m03) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:15:38.681908   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:38.681805  100144 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:15:38.681991   99368 main.go:141] libmachine: (ha-142481-m03) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:15:38.938889   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:38.938689  100144 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa...
	I1010 18:15:39.048405   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:39.048265  100144 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/ha-142481-m03.rawdisk...
	I1010 18:15:39.048440   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Writing magic tar header
	I1010 18:15:39.048457   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Writing SSH key tar header
	I1010 18:15:39.048467   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:39.048382  100144 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 ...
	I1010 18:15:39.048494   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03
	I1010 18:15:39.048510   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:15:39.048527   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 (perms=drwx------)
	I1010 18:15:39.048549   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:15:39.048564   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:15:39.048578   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:15:39.048592   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:15:39.048605   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:15:39.048635   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:15:39.048655   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:15:39.048662   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:15:39.048676   99368 main.go:141] libmachine: (ha-142481-m03) Creating domain...
	I1010 18:15:39.048685   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:15:39.048696   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home
	I1010 18:15:39.048710   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Skipping /home - not owner
	I1010 18:15:39.049753   99368 main.go:141] libmachine: (ha-142481-m03) define libvirt domain using xml: 
	I1010 18:15:39.049779   99368 main.go:141] libmachine: (ha-142481-m03) <domain type='kvm'>
	I1010 18:15:39.049790   99368 main.go:141] libmachine: (ha-142481-m03)   <name>ha-142481-m03</name>
	I1010 18:15:39.049799   99368 main.go:141] libmachine: (ha-142481-m03)   <memory unit='MiB'>2200</memory>
	I1010 18:15:39.049809   99368 main.go:141] libmachine: (ha-142481-m03)   <vcpu>2</vcpu>
	I1010 18:15:39.049816   99368 main.go:141] libmachine: (ha-142481-m03)   <features>
	I1010 18:15:39.049822   99368 main.go:141] libmachine: (ha-142481-m03)     <acpi/>
	I1010 18:15:39.049830   99368 main.go:141] libmachine: (ha-142481-m03)     <apic/>
	I1010 18:15:39.049835   99368 main.go:141] libmachine: (ha-142481-m03)     <pae/>
	I1010 18:15:39.049839   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.049845   99368 main.go:141] libmachine: (ha-142481-m03)   </features>
	I1010 18:15:39.049849   99368 main.go:141] libmachine: (ha-142481-m03)   <cpu mode='host-passthrough'>
	I1010 18:15:39.049856   99368 main.go:141] libmachine: (ha-142481-m03)   
	I1010 18:15:39.049862   99368 main.go:141] libmachine: (ha-142481-m03)   </cpu>
	I1010 18:15:39.049890   99368 main.go:141] libmachine: (ha-142481-m03)   <os>
	I1010 18:15:39.049903   99368 main.go:141] libmachine: (ha-142481-m03)     <type>hvm</type>
	I1010 18:15:39.049915   99368 main.go:141] libmachine: (ha-142481-m03)     <boot dev='cdrom'/>
	I1010 18:15:39.049926   99368 main.go:141] libmachine: (ha-142481-m03)     <boot dev='hd'/>
	I1010 18:15:39.049939   99368 main.go:141] libmachine: (ha-142481-m03)     <bootmenu enable='no'/>
	I1010 18:15:39.049945   99368 main.go:141] libmachine: (ha-142481-m03)   </os>
	I1010 18:15:39.049956   99368 main.go:141] libmachine: (ha-142481-m03)   <devices>
	I1010 18:15:39.049966   99368 main.go:141] libmachine: (ha-142481-m03)     <disk type='file' device='cdrom'>
	I1010 18:15:39.049980   99368 main.go:141] libmachine: (ha-142481-m03)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/boot2docker.iso'/>
	I1010 18:15:39.049991   99368 main.go:141] libmachine: (ha-142481-m03)       <target dev='hdc' bus='scsi'/>
	I1010 18:15:39.050016   99368 main.go:141] libmachine: (ha-142481-m03)       <readonly/>
	I1010 18:15:39.050029   99368 main.go:141] libmachine: (ha-142481-m03)     </disk>
	I1010 18:15:39.050036   99368 main.go:141] libmachine: (ha-142481-m03)     <disk type='file' device='disk'>
	I1010 18:15:39.050044   99368 main.go:141] libmachine: (ha-142481-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:15:39.050056   99368 main.go:141] libmachine: (ha-142481-m03)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/ha-142481-m03.rawdisk'/>
	I1010 18:15:39.050065   99368 main.go:141] libmachine: (ha-142481-m03)       <target dev='hda' bus='virtio'/>
	I1010 18:15:39.050070   99368 main.go:141] libmachine: (ha-142481-m03)     </disk>
	I1010 18:15:39.050075   99368 main.go:141] libmachine: (ha-142481-m03)     <interface type='network'>
	I1010 18:15:39.050081   99368 main.go:141] libmachine: (ha-142481-m03)       <source network='mk-ha-142481'/>
	I1010 18:15:39.050087   99368 main.go:141] libmachine: (ha-142481-m03)       <model type='virtio'/>
	I1010 18:15:39.050092   99368 main.go:141] libmachine: (ha-142481-m03)     </interface>
	I1010 18:15:39.050099   99368 main.go:141] libmachine: (ha-142481-m03)     <interface type='network'>
	I1010 18:15:39.050104   99368 main.go:141] libmachine: (ha-142481-m03)       <source network='default'/>
	I1010 18:15:39.050114   99368 main.go:141] libmachine: (ha-142481-m03)       <model type='virtio'/>
	I1010 18:15:39.050121   99368 main.go:141] libmachine: (ha-142481-m03)     </interface>
	I1010 18:15:39.050128   99368 main.go:141] libmachine: (ha-142481-m03)     <serial type='pty'>
	I1010 18:15:39.050232   99368 main.go:141] libmachine: (ha-142481-m03)       <target port='0'/>
	I1010 18:15:39.050268   99368 main.go:141] libmachine: (ha-142481-m03)     </serial>
	I1010 18:15:39.050282   99368 main.go:141] libmachine: (ha-142481-m03)     <console type='pty'>
	I1010 18:15:39.050294   99368 main.go:141] libmachine: (ha-142481-m03)       <target type='serial' port='0'/>
	I1010 18:15:39.050305   99368 main.go:141] libmachine: (ha-142481-m03)     </console>
	I1010 18:15:39.050315   99368 main.go:141] libmachine: (ha-142481-m03)     <rng model='virtio'>
	I1010 18:15:39.050328   99368 main.go:141] libmachine: (ha-142481-m03)       <backend model='random'>/dev/random</backend>
	I1010 18:15:39.050340   99368 main.go:141] libmachine: (ha-142481-m03)     </rng>
	I1010 18:15:39.050350   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.050359   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.050371   99368 main.go:141] libmachine: (ha-142481-m03)   </devices>
	I1010 18:15:39.050378   99368 main.go:141] libmachine: (ha-142481-m03) </domain>
	I1010 18:15:39.050391   99368 main.go:141] libmachine: (ha-142481-m03) 
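
For context: the kvm2 driver defines the guest from the domain XML printed above and then boots it. The sketch below is only an illustration of those two libvirt steps using the current Go bindings, not the driver's actual code; the import path, the trimmed-down XML, and the domain name are assumptions, and building it requires libvirt and its development headers.

```go
// Hedged sketch: define a libvirt domain from an XML document and start it,
// mirroring the "define libvirt domain using xml" / "Creating domain..." steps above.
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	// Same URI as KVMQemuURI in the cluster config above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Trimmed-down stand-in for the full XML printed in the log.
	domainXML := `<domain type='kvm'>
  <name>example-vm</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
</domain>`

	// Persistently define the guest, then boot it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("create: %v", err)
	}
	log.Println("domain defined and started")
}
```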
	I1010 18:15:39.057742   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:01:68:df in network default
	I1010 18:15:39.058339   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring networks are active...
	I1010 18:15:39.058372   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:39.059040   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring network default is active
	I1010 18:15:39.059385   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring network mk-ha-142481 is active
	I1010 18:15:39.060065   99368 main.go:141] libmachine: (ha-142481-m03) Getting domain xml...
	I1010 18:15:39.061108   99368 main.go:141] libmachine: (ha-142481-m03) Creating domain...
	I1010 18:15:40.343936   99368 main.go:141] libmachine: (ha-142481-m03) Waiting to get IP...
	I1010 18:15:40.344892   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.345373   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.345401   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.345319  100144 retry.go:31] will retry after 289.570163ms: waiting for machine to come up
	I1010 18:15:40.637167   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.637765   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.637799   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.637685  100144 retry.go:31] will retry after 311.078832ms: waiting for machine to come up
	I1010 18:15:40.950108   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.950581   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.950610   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.950529  100144 retry.go:31] will retry after 356.951796ms: waiting for machine to come up
	I1010 18:15:41.309147   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:41.309650   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:41.309677   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:41.309602  100144 retry.go:31] will retry after 532.45566ms: waiting for machine to come up
	I1010 18:15:41.843545   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:41.844119   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:41.844147   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:41.844054  100144 retry.go:31] will retry after 601.557958ms: waiting for machine to come up
	I1010 18:15:42.447022   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:42.447619   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:42.447649   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:42.447560  100144 retry.go:31] will retry after 756.716179ms: waiting for machine to come up
	I1010 18:15:43.206472   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:43.207013   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:43.207043   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:43.206973  100144 retry.go:31] will retry after 1.170057285s: waiting for machine to come up
	I1010 18:15:44.378682   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:44.379169   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:44.379199   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:44.379123  100144 retry.go:31] will retry after 1.176461257s: waiting for machine to come up
	I1010 18:15:45.558684   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:45.559193   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:45.559220   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:45.559154  100144 retry.go:31] will retry after 1.48319029s: waiting for machine to come up
	I1010 18:15:47.044036   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:47.044496   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:47.044521   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:47.044430  100144 retry.go:31] will retry after 1.688231692s: waiting for machine to come up
	I1010 18:15:48.734646   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:48.735151   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:48.735174   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:48.735104  100144 retry.go:31] will retry after 2.212019945s: waiting for machine to come up
	I1010 18:15:50.948675   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:50.949207   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:50.949236   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:50.949160  100144 retry.go:31] will retry after 2.319000915s: waiting for machine to come up
	I1010 18:15:53.270642   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:53.271193   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:53.271216   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:53.271155  100144 retry.go:31] will retry after 3.719042495s: waiting for machine to come up
	I1010 18:15:56.994579   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:56.995029   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:56.995054   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:56.994970  100144 retry.go:31] will retry after 5.298417625s: waiting for machine to come up
	I1010 18:16:02.294993   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.295462   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has current primary IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.295487   99368 main.go:141] libmachine: (ha-142481-m03) Found IP for machine: 192.168.39.175
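
The retry.go lines above show the driver polling for the guest's DHCP lease with a growing delay (roughly 0.3s up to ~5s) until an IP appears. Below is a minimal sketch of that retry-with-backoff pattern; the waitForIP/lookupIP names and the simple delay growth rule are illustrative assumptions, not minikube's actual implementation (which queries libvirt for the lease of the domain's MAC address).

```go
// Sketch of the wait-for-IP loop seen in the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical placeholder for the libvirt DHCP-lease query.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait, roughly matching the 0.3s -> 5.3s progression in the log
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", domain)
}

func main() {
	if ip, err := waitForIP("ha-142481-m03", 2*time.Minute); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
```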
	I1010 18:16:02.295500   99368 main.go:141] libmachine: (ha-142481-m03) Reserving static IP address...
	I1010 18:16:02.295917   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find host DHCP lease matching {name: "ha-142481-m03", mac: "52:54:00:06:ed:5a", ip: "192.168.39.175"} in network mk-ha-142481
	I1010 18:16:02.376364   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Getting to WaitForSSH function...
	I1010 18:16:02.376400   99368 main.go:141] libmachine: (ha-142481-m03) Reserved static IP address: 192.168.39.175
	I1010 18:16:02.376420   99368 main.go:141] libmachine: (ha-142481-m03) Waiting for SSH to be available...
	I1010 18:16:02.379038   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.379428   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481
	I1010 18:16:02.379482   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find defined IP address of network mk-ha-142481 interface with MAC address 52:54:00:06:ed:5a
	I1010 18:16:02.379643   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH client type: external
	I1010 18:16:02.379666   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa (-rw-------)
	I1010 18:16:02.379695   99368 main.go:141] libmachine: (ha-142481-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:16:02.379708   99368 main.go:141] libmachine: (ha-142481-m03) DBG | About to run SSH command:
	I1010 18:16:02.379720   99368 main.go:141] libmachine: (ha-142481-m03) DBG | exit 0
	I1010 18:16:02.383609   99368 main.go:141] libmachine: (ha-142481-m03) DBG | SSH cmd err, output: exit status 255: 
	I1010 18:16:02.383645   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1010 18:16:02.383673   99368 main.go:141] libmachine: (ha-142481-m03) DBG | command : exit 0
	I1010 18:16:02.383687   99368 main.go:141] libmachine: (ha-142481-m03) DBG | err     : exit status 255
	I1010 18:16:02.383701   99368 main.go:141] libmachine: (ha-142481-m03) DBG | output  : 
	I1010 18:16:05.385045   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Getting to WaitForSSH function...
	I1010 18:16:05.387500   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.388024   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.388058   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.388149   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH client type: external
	I1010 18:16:05.388172   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa (-rw-------)
	I1010 18:16:05.388198   99368 main.go:141] libmachine: (ha-142481-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:16:05.388212   99368 main.go:141] libmachine: (ha-142481-m03) DBG | About to run SSH command:
	I1010 18:16:05.388222   99368 main.go:141] libmachine: (ha-142481-m03) DBG | exit 0
	I1010 18:16:05.517373   99368 main.go:141] libmachine: (ha-142481-m03) DBG | SSH cmd err, output: <nil>: 
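
WaitForSSH keeps running `exit 0` over ssh with the options printed above until the command returns status 0 (the attempt at 18:16:02 failed with status 255 because the lease was not yet visible; the retry at 18:16:05 succeeded). A sketch of that probe using os/exec follows; the probeSSH name and key path are illustrative, and the flag set is copied from the log rather than from minikube's source.

```go
// Sketch of the SSH readiness probe: run `exit 0` via the external ssh client
// and treat a zero exit status as "SSH is available".
package main

import (
	"fmt"
	"os/exec"
)

func probeSSH(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	if err := probeSSH("192.168.39.175", "/path/to/machines/ha-142481-m03/id_rsa"); err != nil {
		fmt.Println("ssh not ready yet:", err)
		return
	}
	fmt.Println("ssh is available")
}
```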
	I1010 18:16:05.517675   99368 main.go:141] libmachine: (ha-142481-m03) KVM machine creation complete!
	I1010 18:16:05.517976   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:16:05.518524   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:05.518756   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:05.518928   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:16:05.518944   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetState
	I1010 18:16:05.520359   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:16:05.520374   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:16:05.520382   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:16:05.520388   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.523092   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.523568   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.523601   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.523714   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.523901   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.524055   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.524156   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.524338   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.524636   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.524669   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:16:05.632367   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:16:05.632396   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:16:05.632408   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.635809   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.636216   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.636238   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.636547   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.636757   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.636963   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.637090   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.637319   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.637523   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.637539   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:16:05.749769   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:16:05.749833   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:16:05.749840   99368 main.go:141] libmachine: Provisioning with buildroot...
	I1010 18:16:05.749847   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:05.750100   99368 buildroot.go:166] provisioning hostname "ha-142481-m03"
	I1010 18:16:05.750135   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:05.750348   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.753204   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.753697   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.753724   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.753970   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.754155   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.754326   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.754456   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.754597   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.754815   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.754835   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481-m03 && echo "ha-142481-m03" | sudo tee /etc/hostname
	I1010 18:16:05.886094   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481-m03
	
	I1010 18:16:05.886129   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.889027   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.889401   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.889420   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.889629   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.889843   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.889995   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.890115   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.890271   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.890474   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.890491   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:16:06.011027   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:16:06.011075   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:16:06.011118   99368 buildroot.go:174] setting up certificates
	I1010 18:16:06.011128   99368 provision.go:84] configureAuth start
	I1010 18:16:06.011159   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:06.011515   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.014592   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.015019   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.015050   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.015255   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.017745   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.018212   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.018241   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.018399   99368 provision.go:143] copyHostCerts
	I1010 18:16:06.018428   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:16:06.018461   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:16:06.018471   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:16:06.018534   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:16:06.018611   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:16:06.018628   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:16:06.018635   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:16:06.018659   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:16:06.018703   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:16:06.018722   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:16:06.018728   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:16:06.018748   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:16:06.018800   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481-m03 san=[127.0.0.1 192.168.39.175 ha-142481-m03 localhost minikube]
	I1010 18:16:06.222717   99368 provision.go:177] copyRemoteCerts
	I1010 18:16:06.222779   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:16:06.222805   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.225434   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.225825   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.225848   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.226065   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.226286   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.226456   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.226630   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.315791   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:16:06.315882   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:16:06.343259   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:16:06.343345   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 18:16:06.370749   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:16:06.370822   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:16:06.397148   99368 provision.go:87] duration metric: took 386.005417ms to configureAuth
	I1010 18:16:06.397183   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:16:06.397452   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:06.397548   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.400947   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.401493   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.401529   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.401697   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.401877   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.402099   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.402329   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.402536   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:06.402752   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:06.402772   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:16:06.637717   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:16:06.637751   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:16:06.637762   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetURL
	I1010 18:16:06.639112   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using libvirt version 6000000
	I1010 18:16:06.641181   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.641548   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.641587   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.641730   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:16:06.641747   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:16:06.641756   99368 client.go:171] duration metric: took 27.963208724s to LocalClient.Create
	I1010 18:16:06.641785   99368 start.go:167] duration metric: took 27.963279742s to libmachine.API.Create "ha-142481"
	I1010 18:16:06.641795   99368 start.go:293] postStartSetup for "ha-142481-m03" (driver="kvm2")
	I1010 18:16:06.641804   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:16:06.641824   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.642091   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:16:06.642123   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.644087   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.644396   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.644432   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.644567   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.644765   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.644924   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.645078   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.732228   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:16:06.736988   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:16:06.737036   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:16:06.737116   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:16:06.737228   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:16:06.737241   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:16:06.737350   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:16:06.747599   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:16:06.779643   99368 start.go:296] duration metric: took 137.832802ms for postStartSetup
	I1010 18:16:06.779701   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:16:06.780474   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.783287   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.783711   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.783739   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.784133   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:16:06.784363   99368 start.go:128] duration metric: took 28.126102871s to createHost
	I1010 18:16:06.784390   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.786724   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.787090   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.787113   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.787327   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.787526   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.787700   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.787826   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.787997   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:06.788211   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:06.788226   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:16:06.901742   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584166.882037024
	
	I1010 18:16:06.901769   99368 fix.go:216] guest clock: 1728584166.882037024
	I1010 18:16:06.901778   99368 fix.go:229] Guest: 2024-10-10 18:16:06.882037024 +0000 UTC Remote: 2024-10-10 18:16:06.784377622 +0000 UTC m=+148.714965698 (delta=97.659402ms)
	I1010 18:16:06.901799   99368 fix.go:200] guest clock delta is within tolerance: 97.659402ms
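
The fix.go lines above compare the guest clock (read with `date +%s.%N`) against the host clock and accept the machine when the delta is small; here it is 97.659402ms. The sketch below illustrates that check; running the command locally, the guestClock name, and the 2s tolerance are assumptions for the example (minikube runs the command over SSH, and the log only states the delta is "within tolerance").

```go
// Sketch of the guest-clock delta check shown above.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func guestClock() (time.Time, error) {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		return time.Time{}, err
	}
	parts := strings.SplitN(strings.TrimSpace(string(out)), ".", 2)
	if len(parts) != 2 {
		return time.Time{}, fmt.Errorf("unexpected date output: %q", out)
	}
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed value for illustration
	guest, err := guestClock()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}
```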
	I1010 18:16:06.901806   99368 start.go:83] releasing machines lock for "ha-142481-m03", held for 28.24367452s
	I1010 18:16:06.901831   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.902170   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.904709   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.905164   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.905194   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.907619   99368 out.go:177] * Found network options:
	I1010 18:16:06.909057   99368 out.go:177]   - NO_PROXY=192.168.39.104,192.168.39.186
	W1010 18:16:06.910397   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	W1010 18:16:06.910422   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:16:06.910439   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911020   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911247   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911351   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:16:06.911394   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	W1010 18:16:06.911428   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	W1010 18:16:06.911458   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:16:06.911514   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:16:06.911529   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.914295   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914543   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914629   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.914656   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914760   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.914838   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.914856   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914913   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.915049   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.915098   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.915168   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.915225   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.915381   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.915497   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:07.163627   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:16:07.170344   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:16:07.170418   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:16:07.188658   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:16:07.188691   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:16:07.188764   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:16:07.207458   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:16:07.223388   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:16:07.223465   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:16:07.240312   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:16:07.258338   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:16:07.397297   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:16:07.555534   99368 docker.go:233] disabling docker service ...
	I1010 18:16:07.555621   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:16:07.571003   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:16:07.585612   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:16:07.724995   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:16:07.861369   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:16:07.876144   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:16:07.895651   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:16:07.895716   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.906721   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:16:07.906792   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.917729   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.929016   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.940559   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:16:07.953995   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.965226   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.984344   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.995983   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:16:08.006420   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:16:08.006504   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:16:08.021735   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:16:08.033011   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:08.164791   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
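	For reference, the cri-o reconfiguration captured above amounts to pinning the pause image, switching the cgroup manager, and re-opening unprivileged ports before restarting the runtime. Condensed into standalone commands (a sketch only; every path and value is taken verbatim from the log lines above):

	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl daemon-reload && sudo systemctl restart crio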
	I1010 18:16:08.260672   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:16:08.260742   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:16:08.271900   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:16:08.271960   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:16:08.275929   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:16:08.314672   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:16:08.314749   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:16:08.346340   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:16:08.377606   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:16:08.379014   99368 out.go:177]   - env NO_PROXY=192.168.39.104
	I1010 18:16:08.380435   99368 out.go:177]   - env NO_PROXY=192.168.39.104,192.168.39.186
	I1010 18:16:08.381694   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:08.384544   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:08.384908   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:08.384939   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:08.385183   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:16:08.389725   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:16:08.402638   99368 mustload.go:65] Loading cluster: ha-142481
	I1010 18:16:08.402881   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:08.403135   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:08.403183   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:08.418274   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I1010 18:16:08.418827   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:08.419392   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:08.419418   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:08.419747   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:08.419899   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:16:08.421605   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:16:08.421927   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:08.421980   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:08.437329   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I1010 18:16:08.437789   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:08.438250   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:08.438271   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:08.438615   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:08.438801   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:16:08.438970   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.175
	I1010 18:16:08.438988   99368 certs.go:194] generating shared ca certs ...
	I1010 18:16:08.439008   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.439150   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:16:08.439211   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:16:08.439224   99368 certs.go:256] generating profile certs ...
	I1010 18:16:08.439325   99368 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:16:08.439355   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d
	I1010 18:16:08.439376   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.186 192.168.39.175 192.168.39.254]
	I1010 18:16:08.528731   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d ...
	I1010 18:16:08.528764   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d: {Name:mk202db6f01b46b51940ca7afe581ede7b3af4e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.528980   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d ...
	I1010 18:16:08.528997   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d: {Name:mk61783eedf299ba3a6dbb3f62b131938823078c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.529112   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:16:08.529294   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
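	The apiserver profile certificate generated here is signed for the in-cluster service IPs, localhost, all three control-plane node IPs, and the load-balancer VIP 192.168.39.254 listed above. If a later join fails with TLS validation errors, the SAN list can be inspected on the node once the cert has been copied to /var/lib/minikube/certs (a hedged sketch using standard openssl, not part of the captured run):

	  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A2 'Subject Alternative Name'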
	I1010 18:16:08.529465   99368 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:16:08.529488   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:16:08.529506   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:16:08.529521   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:16:08.529540   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:16:08.529557   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:16:08.529580   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:16:08.529599   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:16:08.545002   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:16:08.545123   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:16:08.545166   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:16:08.545178   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:16:08.545225   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:16:08.545259   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:16:08.545291   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:16:08.545339   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:16:08.545380   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:16:08.545401   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:08.545415   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:16:08.545465   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:16:08.548797   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:08.549296   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:16:08.549316   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:08.549545   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:16:08.549789   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:16:08.549993   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:16:08.550143   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:16:08.629272   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1010 18:16:08.635349   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1010 18:16:08.648258   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1010 18:16:08.653797   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1010 18:16:08.665553   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1010 18:16:08.670066   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1010 18:16:08.681281   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1010 18:16:08.685851   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1010 18:16:08.696759   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1010 18:16:08.701070   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1010 18:16:08.719143   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1010 18:16:08.723782   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1010 18:16:08.735082   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:16:08.763420   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:16:08.789246   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:16:08.814697   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:16:08.840641   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1010 18:16:08.865783   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:16:08.890663   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:16:08.916077   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:16:08.941574   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:16:08.971689   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:16:08.996394   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:16:09.021329   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1010 18:16:09.039289   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1010 18:16:09.058514   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1010 18:16:09.075508   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1010 18:16:09.094047   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1010 18:16:09.112093   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1010 18:16:09.130182   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1010 18:16:09.147655   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:16:09.153962   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:16:09.165361   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.170099   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.170163   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.175991   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:16:09.187134   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:16:09.199298   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.204550   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.204607   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.210501   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:16:09.222047   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:16:09.233165   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.238141   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.238209   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.243899   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
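	The openssl x509 -hash -noout calls above compute the subject hash that names the symlinks under /etc/ssl/certs, which is why minikubeCA.pem ends up as b5213941.0 and 888762.pem as 3ec20f2e.0. Recomputing one by hand mirrors the log exactly (illustrative):

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0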
	I1010 18:16:09.256154   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:16:09.260558   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:16:09.260620   99368 kubeadm.go:934] updating node {m03 192.168.39.175 8443 v1.31.1 crio true true} ...
	I1010 18:16:09.260712   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:16:09.260747   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:16:09.260788   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:16:09.281432   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:16:09.281532   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
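	The kube-vip config above is a static pod manifest; a few lines further down it is copied to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet on ha-142481-m03 runs it and advertises the control-plane VIP 192.168.39.254 on port 8443 with leader election and load-balancing enabled. To inspect it after the fact, something along these lines should work (illustrative only, not part of the test run):

	  minikube -p ha-142481 ssh -n m03 -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml
	  kubectl --context ha-142481 -n kube-system get pods | grep kube-vip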
	I1010 18:16:09.281598   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:16:09.292238   99368 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1010 18:16:09.292302   99368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1010 18:16:09.302815   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1010 18:16:09.302834   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1010 18:16:09.302847   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:16:09.302858   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:16:09.302874   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1010 18:16:09.302911   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:16:09.302925   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:16:09.302927   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:16:09.313038   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1010 18:16:09.313076   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1010 18:16:09.313295   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1010 18:16:09.313324   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1010 18:16:09.329019   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:16:09.329132   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:16:09.460792   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1010 18:16:09.460863   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
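	Because /var/lib/minikube/binaries/v1.31.1 did not yet exist on the new node, kubeadm, kubectl and kubelet are streamed from the local cache; the "Not caching binary" lines show the upstream URLs and their .sha256 checksum convention on dl.k8s.io. A manual spot-check of that convention looks roughly like this (hedged sketch, outside the test run):

	  curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl
	  echo "$(curl -L https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256)  kubectl" | sha256sum --check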
	I1010 18:16:10.167695   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1010 18:16:10.178304   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1010 18:16:10.196198   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:16:10.214107   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1010 18:16:10.231699   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:16:10.235598   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:16:10.249379   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:10.372228   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:16:10.389956   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:16:10.390482   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:10.390543   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:10.406538   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
	I1010 18:16:10.407120   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:10.407715   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:10.407745   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:10.408171   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:10.408424   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:16:10.408616   99368 start.go:317] joinCluster: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:16:10.408761   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1010 18:16:10.408786   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:16:10.412501   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:10.412938   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:16:10.412967   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:10.413287   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:16:10.413489   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:16:10.413662   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:16:10.413878   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:16:10.584962   99368 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:10.585036   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01a2dn.g9vqo5mbslppupip --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m03 --control-plane --apiserver-advertise-address=192.168.39.175 --apiserver-bind-port=8443"
	I1010 18:16:34.116751   99368 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01a2dn.g9vqo5mbslppupip --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m03 --control-plane --apiserver-advertise-address=192.168.39.175 --apiserver-bind-port=8443": (23.531656117s)
	I1010 18:16:34.116799   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1010 18:16:34.662406   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481-m03 minikube.k8s.io/updated_at=2024_10_10T18_16_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=false
	I1010 18:16:34.812925   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-142481-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1010 18:16:34.939968   99368 start.go:319] duration metric: took 24.531346267s to joinCluster
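	The join itself is a stock kubeadm flow: the primary prints a join command via kubeadm token create --print-join-command --ttl=0, and the new node runs it with --control-plane and its own advertise address, which took about 23.5s here. Once the step reports success, membership can be confirmed from the host with the profile's kubectl context (illustrative):

	  kubectl --context ha-142481 get nodes -o wide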
	I1010 18:16:34.940121   99368 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:34.940600   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:34.942338   99368 out.go:177] * Verifying Kubernetes components...
	I1010 18:16:34.943872   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:35.261137   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:16:35.322955   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:16:35.323214   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1010 18:16:35.323281   99368 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.104:8443
	I1010 18:16:35.323557   99368 node_ready.go:35] waiting up to 6m0s for node "ha-142481-m03" to be "Ready" ...
	I1010 18:16:35.323656   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:35.323668   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:35.323679   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:35.323685   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:35.327318   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:35.823831   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:35.823858   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:35.823871   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:35.823877   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:35.828659   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:36.324358   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:36.324382   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:36.324391   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:36.324395   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:36.327758   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:36.823911   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:36.823934   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:36.823942   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:36.823946   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:36.827063   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:37.323987   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:37.324011   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:37.324019   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:37.324023   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:37.327375   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:37.328058   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:37.824329   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:37.824354   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:37.824443   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:37.824455   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:37.828067   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:38.323986   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:38.324025   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:38.324040   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:38.324046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:38.327494   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:38.823762   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:38.823785   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:38.823794   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:38.823798   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:38.827926   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:39.323928   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:39.323957   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:39.323969   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:39.323975   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:39.330422   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:39.331171   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:39.824574   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:39.824598   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:39.824607   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:39.824610   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:39.828722   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:40.324796   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:40.324827   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:40.324838   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:40.324845   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:40.328842   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:40.823953   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:40.823979   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:40.823990   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:40.823996   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:40.828272   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:41.324192   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:41.324218   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:41.324227   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:41.324230   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:41.327987   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:41.824162   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:41.824186   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:41.824198   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:41.824204   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:41.827541   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:41.828232   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:42.324743   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:42.324783   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:42.324794   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:42.324801   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:42.328551   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:42.824718   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:42.824744   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:42.824755   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:42.824760   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:42.828428   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.324320   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:43.324346   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:43.324355   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:43.324364   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:43.328322   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.823956   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:43.824002   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:43.824013   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:43.824019   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:43.827615   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.828260   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:44.324587   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:44.324612   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:44.324620   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:44.324623   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:44.328569   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:44.823816   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:44.823840   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:44.823849   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:44.823853   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:44.827589   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.324648   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:45.324673   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:45.324681   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:45.324684   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:45.328227   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.824305   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:45.824330   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:45.824338   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:45.824342   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:45.827901   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.828489   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:46.323779   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:46.323813   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:46.323825   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:46.323830   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:46.327223   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:46.823931   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:46.823955   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:46.823964   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:46.823968   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:46.828168   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:47.324172   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:47.324200   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:47.324214   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:47.324232   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:47.327405   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:47.824446   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:47.824470   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:47.824478   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:47.824483   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:47.828085   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:47.828574   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:48.324641   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:48.324666   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:48.324674   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:48.324678   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:48.328399   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:48.823841   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:48.823872   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:48.823883   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:48.823899   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:48.827862   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:49.324364   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:49.324391   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:49.324402   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:49.324410   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:49.329836   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:16:49.824868   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:49.824898   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:49.824909   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:49.824916   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:49.832424   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:16:49.833781   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:50.324106   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:50.324129   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:50.324137   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:50.324141   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:50.327377   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:50.824781   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:50.824809   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:50.824818   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:50.824824   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:50.828461   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:51.324626   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:51.324651   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:51.324659   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:51.324663   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:51.327965   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:51.824004   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:51.824028   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:51.824036   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:51.824041   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:51.827827   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.323895   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.323930   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.323939   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.323943   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.327292   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.327943   99368 node_ready.go:49] node "ha-142481-m03" has status "Ready":"True"
	I1010 18:16:52.327963   99368 node_ready.go:38] duration metric: took 17.004388796s for node "ha-142481-m03" to be "Ready" ...
	I1010 18:16:52.327973   99368 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:16:52.328041   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:52.328051   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.328058   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.328063   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.335352   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:16:52.341969   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.342092   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-28dll
	I1010 18:16:52.342105   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.342116   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.342121   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.346524   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.347823   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.347844   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.347853   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.347860   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.352427   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.353100   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.353132   99368 pod_ready.go:82] duration metric: took 11.131703ms for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.353146   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.353233   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xfhq8
	I1010 18:16:52.353246   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.353255   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.353262   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.358189   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.359137   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.359158   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.359170   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.359194   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.361882   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.362586   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.362606   99368 pod_ready.go:82] duration metric: took 9.449469ms for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.362618   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.362680   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481
	I1010 18:16:52.362689   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.362696   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.362701   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.365259   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.365819   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.365835   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.365842   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.365857   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.368864   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.369337   99368 pod_ready.go:93] pod "etcd-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.369355   99368 pod_ready.go:82] duration metric: took 6.728138ms for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.369365   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.369427   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m02
	I1010 18:16:52.369435   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.369442   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.369447   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.371801   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.372469   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:52.372485   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.372496   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.372501   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.374845   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.375380   99368 pod_ready.go:93] pod "etcd-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.375400   99368 pod_ready.go:82] duration metric: took 6.028654ms for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.375414   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.524876   99368 request.go:632] Waited for 149.316037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m03
	I1010 18:16:52.524969   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m03
	I1010 18:16:52.524980   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.524993   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.525002   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.528336   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.724349   99368 request.go:632] Waited for 195.357304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.724414   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.724419   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.724429   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.724433   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.727821   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.728420   99368 pod_ready.go:93] pod "etcd-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.728440   99368 pod_ready.go:82] duration metric: took 353.013897ms for pod "etcd-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.728461   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.924606   99368 request.go:632] Waited for 196.006652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:16:52.924680   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:16:52.924687   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.924697   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.924702   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.928387   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.124197   99368 request.go:632] Waited for 194.992104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:53.124259   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:53.124264   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.124276   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.124281   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.127550   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.128097   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.128116   99368 pod_ready.go:82] duration metric: took 399.647709ms for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.128127   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.324538   99368 request.go:632] Waited for 196.340534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:16:53.324600   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:16:53.324606   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.324613   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.324617   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.328266   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.524803   99368 request.go:632] Waited for 195.841443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:53.524898   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:53.524906   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.524920   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.524931   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.529027   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:53.529616   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.529639   99368 pod_ready.go:82] duration metric: took 401.504985ms for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.529650   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.724123   99368 request.go:632] Waited for 194.402378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m03
	I1010 18:16:53.724207   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m03
	I1010 18:16:53.724212   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.724220   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.724226   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.728029   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.924000   99368 request.go:632] Waited for 195.20231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:53.924121   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:53.924136   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.924145   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.924149   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.927318   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.927936   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.927963   99368 pod_ready.go:82] duration metric: took 398.303309ms for pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.927977   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.124931   99368 request.go:632] Waited for 196.86396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:16:54.125030   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:16:54.125037   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.125045   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.125050   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.129323   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:54.324484   99368 request.go:632] Waited for 194.400861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:54.324554   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:54.324564   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.324574   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.324580   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.327854   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.328431   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:54.328451   99368 pod_ready.go:82] duration metric: took 400.466203ms for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.328463   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.524928   99368 request.go:632] Waited for 196.394012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:16:54.524994   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:16:54.525000   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.525008   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.525013   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.528390   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.724248   99368 request.go:632] Waited for 195.108613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:54.724318   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:54.724325   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.724335   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.724341   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.727499   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.727990   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:54.728011   99368 pod_ready.go:82] duration metric: took 399.541027ms for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.728023   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.924017   99368 request.go:632] Waited for 195.924922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m03
	I1010 18:16:54.924118   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m03
	I1010 18:16:54.924129   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.924137   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.924142   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.928875   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:55.123960   99368 request.go:632] Waited for 194.31178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.124017   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.124022   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.124030   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.124033   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.127461   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.128120   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.128144   99368 pod_ready.go:82] duration metric: took 400.113475ms for pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.128160   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cdjzg" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.323986   99368 request.go:632] Waited for 195.748073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cdjzg
	I1010 18:16:55.324049   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cdjzg
	I1010 18:16:55.324055   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.324063   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.324069   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.327396   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.524493   99368 request.go:632] Waited for 196.370396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.524560   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.524567   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.524578   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.524586   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.534026   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:16:55.534701   99368 pod_ready.go:93] pod "kube-proxy-cdjzg" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.534728   99368 pod_ready.go:82] duration metric: took 406.559679ms for pod "kube-proxy-cdjzg" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.534745   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.724765   99368 request.go:632] Waited for 189.945021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:16:55.724857   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:16:55.724864   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.724872   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.724878   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.727940   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.923972   99368 request.go:632] Waited for 195.304711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:55.924037   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:55.924052   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.924078   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.924085   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.927605   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.928243   99368 pod_ready.go:93] pod "kube-proxy-gwvrh" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.928264   99368 pod_ready.go:82] duration metric: took 393.511622ms for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.928278   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.124193   99368 request.go:632] Waited for 195.82573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:16:56.124313   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:16:56.124327   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.124336   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.124340   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.127896   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.324881   99368 request.go:632] Waited for 196.244687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:56.324996   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:56.325012   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.325022   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.325029   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.328576   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.329284   99368 pod_ready.go:93] pod "kube-proxy-srfng" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:56.329304   99368 pod_ready.go:82] duration metric: took 401.01865ms for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.329315   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.524473   99368 request.go:632] Waited for 195.075639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:16:56.524535   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:16:56.524541   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.524548   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.524554   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.527661   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.724798   99368 request.go:632] Waited for 196.388114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:56.724919   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:56.724934   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.724945   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.724955   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.728172   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.728664   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:56.728684   99368 pod_ready.go:82] duration metric: took 399.362342ms for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.728700   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.924703   99368 request.go:632] Waited for 195.908558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:16:56.924769   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:16:56.924784   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.924793   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.924796   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.928241   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.124466   99368 request.go:632] Waited for 195.354302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:57.124566   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:57.124592   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.124604   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.124613   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.128217   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.128748   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:57.128773   99368 pod_ready.go:82] duration metric: took 400.06441ms for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.128788   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.323894   99368 request.go:632] Waited for 195.025916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m03
	I1010 18:16:57.323960   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m03
	I1010 18:16:57.324019   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.324032   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.324036   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.328239   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:57.524431   99368 request.go:632] Waited for 195.425292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:57.524497   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:57.524503   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.524511   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.524515   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.527825   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.528689   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:57.528706   99368 pod_ready.go:82] duration metric: took 399.911051ms for pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.528718   99368 pod_ready.go:39] duration metric: took 5.200736466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:16:57.528734   99368 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:16:57.528787   99368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:16:57.545663   99368 api_server.go:72] duration metric: took 22.605494204s to wait for apiserver process to appear ...
	I1010 18:16:57.545694   99368 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:16:57.545718   99368 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I1010 18:16:57.552066   99368 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I1010 18:16:57.552813   99368 round_trippers.go:463] GET https://192.168.39.104:8443/version
	I1010 18:16:57.552870   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.552882   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.552890   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.555288   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:57.555381   99368 api_server.go:141] control plane version: v1.31.1
	I1010 18:16:57.555401   99368 api_server.go:131] duration metric: took 9.699914ms to wait for apiserver health ...
	I1010 18:16:57.555411   99368 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:16:57.724005   99368 request.go:632] Waited for 168.467999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:57.724082   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:57.724091   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.724106   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.724114   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.730879   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:57.737404   99368 system_pods.go:59] 24 kube-system pods found
	I1010 18:16:57.737436   99368 system_pods.go:61] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:16:57.737442   99368 system_pods.go:61] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:16:57.737445   99368 system_pods.go:61] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:16:57.737449   99368 system_pods.go:61] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:16:57.737452   99368 system_pods.go:61] "etcd-ha-142481-m03" [3f1ae212-d09b-446c-9172-52b9bfc6c20c] Running
	I1010 18:16:57.737456   99368 system_pods.go:61] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:16:57.737459   99368 system_pods.go:61] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:16:57.737463   99368 system_pods.go:61] "kindnet-cjcsf" [237e5649-ed64-401c-befd-99ef520d0761] Running
	I1010 18:16:57.737466   99368 system_pods.go:61] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:16:57.737469   99368 system_pods.go:61] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:16:57.737472   99368 system_pods.go:61] "kube-apiserver-ha-142481-m03" [4c7836a0-6697-4ce5-87d6-582097925f80] Running
	I1010 18:16:57.737476   99368 system_pods.go:61] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:16:57.737480   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:16:57.737484   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m03" [9444eb06-6dc4-44ab-a7d6-2d1d5b3e6410] Running
	I1010 18:16:57.737487   99368 system_pods.go:61] "kube-proxy-cdjzg" [98288460-9764-4e92-a589-e7e34654cfc5] Running
	I1010 18:16:57.737491   99368 system_pods.go:61] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:16:57.737494   99368 system_pods.go:61] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:16:57.737499   99368 system_pods.go:61] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:16:57.737505   99368 system_pods.go:61] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:16:57.737509   99368 system_pods.go:61] "kube-scheduler-ha-142481-m03" [a3eea545-bc31-4990-ad58-a43666964468] Running
	I1010 18:16:57.737512   99368 system_pods.go:61] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:16:57.737515   99368 system_pods.go:61] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:16:57.737519   99368 system_pods.go:61] "kube-vip-ha-142481-m03" [a93b4d63-0f6c-47b5-b987-a082b2b0d51a] Running
	I1010 18:16:57.737522   99368 system_pods.go:61] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:16:57.737528   99368 system_pods.go:74] duration metric: took 182.108204ms to wait for pod list to return data ...
	I1010 18:16:57.737537   99368 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:16:57.923961   99368 request.go:632] Waited for 186.32043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:16:57.924040   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:16:57.924048   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.924059   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.924064   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.928023   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.928206   99368 default_sa.go:45] found service account: "default"
	I1010 18:16:57.928229   99368 default_sa.go:55] duration metric: took 190.684117ms for default service account to be created ...
	I1010 18:16:57.928243   99368 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:16:58.124915   99368 request.go:632] Waited for 196.547566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:58.124982   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:58.124989   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:58.124999   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:58.125007   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:58.131096   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:58.138059   99368 system_pods.go:86] 24 kube-system pods found
	I1010 18:16:58.138089   99368 system_pods.go:89] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:16:58.138095   99368 system_pods.go:89] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:16:58.138099   99368 system_pods.go:89] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:16:58.138103   99368 system_pods.go:89] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:16:58.138107   99368 system_pods.go:89] "etcd-ha-142481-m03" [3f1ae212-d09b-446c-9172-52b9bfc6c20c] Running
	I1010 18:16:58.138111   99368 system_pods.go:89] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:16:58.138114   99368 system_pods.go:89] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:16:58.138117   99368 system_pods.go:89] "kindnet-cjcsf" [237e5649-ed64-401c-befd-99ef520d0761] Running
	I1010 18:16:58.138120   99368 system_pods.go:89] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:16:58.138124   99368 system_pods.go:89] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:16:58.138127   99368 system_pods.go:89] "kube-apiserver-ha-142481-m03" [4c7836a0-6697-4ce5-87d6-582097925f80] Running
	I1010 18:16:58.138131   99368 system_pods.go:89] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:16:58.138134   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:16:58.138138   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m03" [9444eb06-6dc4-44ab-a7d6-2d1d5b3e6410] Running
	I1010 18:16:58.138141   99368 system_pods.go:89] "kube-proxy-cdjzg" [98288460-9764-4e92-a589-e7e34654cfc5] Running
	I1010 18:16:58.138145   99368 system_pods.go:89] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:16:58.138148   99368 system_pods.go:89] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:16:58.138150   99368 system_pods.go:89] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:16:58.138153   99368 system_pods.go:89] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:16:58.138156   99368 system_pods.go:89] "kube-scheduler-ha-142481-m03" [a3eea545-bc31-4990-ad58-a43666964468] Running
	I1010 18:16:58.138160   99368 system_pods.go:89] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:16:58.138163   99368 system_pods.go:89] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:16:58.138165   99368 system_pods.go:89] "kube-vip-ha-142481-m03" [a93b4d63-0f6c-47b5-b987-a082b2b0d51a] Running
	I1010 18:16:58.138168   99368 system_pods.go:89] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:16:58.138175   99368 system_pods.go:126] duration metric: took 209.923309ms to wait for k8s-apps to be running ...
	I1010 18:16:58.138188   99368 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:16:58.138234   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:16:58.154620   99368 system_svc.go:56] duration metric: took 16.42135ms WaitForService to wait for kubelet
	I1010 18:16:58.154660   99368 kubeadm.go:582] duration metric: took 23.214494056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:16:58.154684   99368 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:16:58.324577   99368 request.go:632] Waited for 169.800219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I1010 18:16:58.324670   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I1010 18:16:58.324677   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:58.324687   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:58.324694   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:58.328908   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:58.329887   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329907   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329918   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329922   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329926   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329929   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329932   99368 node_conditions.go:105] duration metric: took 175.242574ms to run NodePressure ...
	I1010 18:16:58.329945   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:16:58.329965   99368 start.go:255] writing updated cluster config ...
	I1010 18:16:58.330248   99368 ssh_runner.go:195] Run: rm -f paused
	I1010 18:16:58.382565   99368 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 18:16:58.384704   99368 out.go:177] * Done! kubectl is now configured to use "ha-142481" cluster and "default" namespace by default
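	The client trace above ends with minikube's readiness checks: it polls GET /api/v1/nodes/<name> until the node reports Ready, waits for the system-critical pods, and finally probes the apiserver's /healthz endpoint (api_server.go:253) and /version before declaring the cluster up. The following is a minimal, self-contained Go sketch of that last step only; the endpoint URL is copied from the log, while the timeout, poll interval, and the decision to skip TLS verification are illustrative assumptions and not minikube's actual implementation.

	    // healthzpoll: hypothetical sketch of polling an apiserver /healthz endpoint
	    // until it returns 200 OK or a timeout expires. TLS verification is skipped
	    // here only because the sketch carries no CA bundle; minikube's real check differs.
	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"os"
	    	"time"
	    )

	    func waitForHealthz(url string, timeout time.Duration) error {
	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		Transport: &http.Transport{
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	    		},
	    	}
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK {
	    				return nil // apiserver answered "ok"
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond) // roughly the ~500ms cadence visible in the log
	    	}
	    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	    }

	    func main() {
	    	// Address taken from the log above; adjust for your own cluster.
	    	if err := waitForHealthz("https://192.168.39.104:8443/healthz", 2*time.Minute); err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		os.Exit(1)
	    	}
	    	fmt.Println("apiserver healthy")
	    }

	As the log shows, the real wait loop additionally re-checks /version and the kube-system pod list before reporting "Done!".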
	
	
	==> CRI-O <==
	Oct 10 18:20:52 ha-142481 crio[662]: time="2024-10-10 18:20:52.900193489Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584452900170616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5302855b-5f5d-43ef-a77c-41f990a81b79 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:52 ha-142481 crio[662]: time="2024-10-10 18:20:52.900736663Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=685cd3cd-1477-4fd3-b830-db7d5efc68cb name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:52 ha-142481 crio[662]: time="2024-10-10 18:20:52.900791967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=685cd3cd-1477-4fd3-b830-db7d5efc68cb name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:52 ha-142481 crio[662]: time="2024-10-10 18:20:52.901042462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=685cd3cd-1477-4fd3-b830-db7d5efc68cb name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:52 ha-142481 crio[662]: time="2024-10-10 18:20:52.951412489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c2bdc00-46bd-4286-af04-9b9479606c14 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:52 ha-142481 crio[662]: time="2024-10-10 18:20:52.951487360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c2bdc00-46bd-4286-af04-9b9479606c14 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:52 ha-142481 crio[662]: time="2024-10-10 18:20:52.953241974Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec5f7d24-58a9-49fb-82c7-8e64ecd7b57c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:52 ha-142481 crio[662]: time="2024-10-10 18:20:52.953708996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584452953683369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec5f7d24-58a9-49fb-82c7-8e64ecd7b57c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:52 ha-142481 crio[662]: time="2024-10-10 18:20:52.954365716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3191b84a-b3e9-41cd-ab97-e908f4ee4833 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:52 ha-142481 crio[662]: time="2024-10-10 18:20:52.954671975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3191b84a-b3e9-41cd-ab97-e908f4ee4833 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:52 ha-142481 crio[662]: time="2024-10-10 18:20:52.955116836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3191b84a-b3e9-41cd-ab97-e908f4ee4833 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.010048348Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b2405e2-7aa5-4b07-b223-748b03cb4eda name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.010129873Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b2405e2-7aa5-4b07-b223-748b03cb4eda name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.013193575Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01040140-af81-4f40-a45b-d16c405dcb61 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.015446017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584453015414535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01040140-af81-4f40-a45b-d16c405dcb61 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.016055180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3688cbb-34c1-4f6a-9804-24e880c23f52 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.016122514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3688cbb-34c1-4f6a-9804-24e880c23f52 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.016385445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3688cbb-34c1-4f6a-9804-24e880c23f52 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.066862228Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a98116e-6c20-4018-a216-bbbf309beb6d name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.067053952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a98116e-6c20-4018-a216-bbbf309beb6d name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.069093159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6948b8d-d03d-4327-ba4a-0ae137313ba8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.069626222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584453069535440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6948b8d-d03d-4327-ba4a-0ae137313ba8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.070229830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=966c9a4a-18ec-48f6-a22a-bb0c0845fe70 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.070312094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=966c9a4a-18ec-48f6-a22a-bb0c0845fe70 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:53 ha-142481 crio[662]: time="2024-10-10 18:20:53.070651210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=966c9a4a-18ec-48f6-a22a-bb0c0845fe70 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c07ad1fe2bce4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   0cebb1db5e1d3       busybox-7dff88458-xnwpj
	018e6370bdfda       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   84952d68d14fb       coredns-7c65d6cfc9-xfhq8
	5c208648c013d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   20b740049c585       coredns-7c65d6cfc9-28dll
	2eb7357e74059       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   a78996796d2ea       storage-provisioner
	b32ac96128061       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   d5a1a0a19e5bc       kindnet-4d9v4
	9f7d32719ebd2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   63eed92e7516a       kube-proxy-gwvrh
	80e86419d2aad       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   ef586683ae3a5       kube-vip-ha-142481
	751981b34b5e9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   a1a198bd8221c       kube-apiserver-ha-142481
	4d7eb644bee42       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   df70f8cffd3d4       kube-controller-manager-ha-142481
	43b160f9e1140       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   cf562380e5c8d       kube-scheduler-ha-142481
	206693e605977       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   84fece63e17b5       etcd-ha-142481
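The table above is the human-readable rendering of the same container inventory that the crio debug log returns for every /runtime.v1.RuntimeService/ListContainers call. Purely as an illustration (this is not part of the captured log), such a call can be reproduced against the CRI-O socket roughly as below; the socket path is taken from the cri-socket annotation further down, and the gRPC/CRI package paths are assumptions rather than anything the test itself runs.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O listens on a unix socket; the path matches the
	// kubeadm.alpha.kubernetes.io/cri-socket annotation in the node description.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter corresponds to the "No filters were applied,
	// returning full container list" debug line in the crio log.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
	}
}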
	
	
	==> coredns [018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37] <==
	[INFO] 10.244.1.2:34545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001557695s
	[INFO] 10.244.1.2:38085 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108964s
	[INFO] 10.244.1.2:51531 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130545s
	[INFO] 10.244.0.4:44429 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002010271s
	[INFO] 10.244.0.4:54303 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097043s
	[INFO] 10.244.0.4:42398 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046814s
	[INFO] 10.244.0.4:45760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003792s
	[INFO] 10.244.2.2:37649 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126566s
	[INFO] 10.244.2.2:40587 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124439s
	[INFO] 10.244.2.2:57109 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008569s
	[INFO] 10.244.1.2:44569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190494s
	[INFO] 10.244.1.2:36745 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100275s
	[INFO] 10.244.1.2:43935 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110935s
	[INFO] 10.244.0.4:38393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150867s
	[INFO] 10.244.0.4:42701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114037s
	[INFO] 10.244.0.4:38022 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153775s
	[INFO] 10.244.0.4:54617 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066619s
	[INFO] 10.244.2.2:38084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000171s
	[INFO] 10.244.2.2:42518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000188177s
	[INFO] 10.244.2.2:46288 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151696s
	[INFO] 10.244.1.2:54065 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167454s
	[INFO] 10.244.1.2:49349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138818s
	[INFO] 10.244.0.4:46873 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110042s
	[INFO] 10.244.0.4:51740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092418s
	[INFO] 10.244.0.4:46743 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066541s
	
	
	==> coredns [5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51137 - 38313 "HINFO IN 987630183612321637.831480708693955805. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.022844151s
	[INFO] 10.244.2.2:42578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001085393s
	[INFO] 10.244.1.2:46574 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002185448s
	[INFO] 10.244.0.4:39782 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001587443s
	[INFO] 10.244.0.4:53063 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000500521s
	[INFO] 10.244.2.2:54233 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215976s
	[INFO] 10.244.2.2:58923 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163879s
	[INFO] 10.244.1.2:45749 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253197s
	[INFO] 10.244.1.2:48261 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001731s
	[INFO] 10.244.1.2:46306 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179475s
	[INFO] 10.244.0.4:41358 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015898s
	[INFO] 10.244.0.4:57383 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192727s
	[INFO] 10.244.0.4:41993 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083721s
	[INFO] 10.244.0.4:60789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398106s
	[INFO] 10.244.2.2:56030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145862s
	[INFO] 10.244.1.2:34434 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144043s
	[INFO] 10.244.2.2:40687 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170156s
	[INFO] 10.244.1.2:56591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140447s
	[INFO] 10.244.1.2:34586 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215712s
	[INFO] 10.244.0.4:49420 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094221s
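Each coredns line above follows the log plugin's one-line query format: client address and port, a per-client query counter, the quoted question (type, class, name, protocol, request size, DO bit, UDP buffer size), then the response code, header flags, response size, and duration. A minimal sketch of pulling those fields apart is shown below; the field names are informal labels for the visible columns, not identifiers from the coredns source.

package main

import (
	"fmt"
	"regexp"
)

// Rough pattern for the coredns query log lines captured above.
var line = regexp.MustCompile(
	`\[INFO\] (?P<client>\S+) - (?P<id>\d+) "(?P<qtype>\S+) IN (?P<name>\S+) (?P<proto>\S+) (?P<size>\d+) (?P<do>\S+) (?P<bufsize>\d+)" (?P<rcode>\S+) (?P<flags>\S+) (?P<rsize>\d+) (?P<dur>\S+)`)

func main() {
	sample := `[INFO] 10.244.1.2:34545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001557695s`
	m := line.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	for i, name := range line.SubexpNames() {
		if i > 0 && name != "" {
			fmt.Printf("%-8s %s\n", name, m[i])
		}
	}
}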
	
	
	==> describe nodes <==
	Name:               ha-142481
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T18_14_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:14:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-142481
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 103fd1cad9094f108b20248867a8c9f2
	  System UUID:                103fd1ca-d909-4f10-8b20-248867a8c9f2
	  Boot ID:                    ea46d519-f733-4cdc-b631-5fb0eb75e07c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xnwpj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 coredns-7c65d6cfc9-28dll             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m30s
	  kube-system                 coredns-7c65d6cfc9-xfhq8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m30s
	  kube-system                 etcd-ha-142481                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m32s
	  kube-system                 kindnet-4d9v4                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m31s
	  kube-system                 kube-apiserver-ha-142481             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-controller-manager-ha-142481    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-proxy-gwvrh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-scheduler-ha-142481             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-vip-ha-142481                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m28s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m42s (x7 over 6m42s)  kubelet          Node ha-142481 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m42s (x8 over 6m42s)  kubelet          Node ha-142481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m42s (x8 over 6m42s)  kubelet          Node ha-142481 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m32s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m32s                  kubelet          Node ha-142481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s                  kubelet          Node ha-142481 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m32s                  kubelet          Node ha-142481 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	  Normal  NodeReady                6m18s                  kubelet          Node ha-142481 status is now: NodeReady
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	
	
	Name:               ha-142481-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_15_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:15:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:18:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    ha-142481-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64af1b9db3cc41a38fc696e261399a82
	  System UUID:                64af1b9d-b3cc-41a3-8fc6-96e261399a82
	  Boot ID:                    1ad9a5aa-6f71-4b62-94f2-fcfc6f775bcc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wf7qs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 etcd-ha-142481-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m39s
	  kube-system                 kindnet-5k6j8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m41s
	  kube-system                 kube-apiserver-ha-142481-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-controller-manager-ha-142481-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-proxy-srfng                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-scheduler-ha-142481-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-vip-ha-142481-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m36s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m41s (x8 over 5m41s)  kubelet          Node ha-142481-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s (x8 over 5m41s)  kubelet          Node ha-142481-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s (x7 over 5m41s)  kubelet          Node ha-142481-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m36s                  node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  NodeNotReady             116s                   node-controller  Node ha-142481-m02 status is now: NodeNotReady
	
	
	Name:               ha-142481-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_16_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:16:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    ha-142481-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 940ef061e50d4431baad36dbbc54f8b4
	  System UUID:                940ef061-e50d-4431-baad-36dbbc54f8b4
	  Boot ID:                    48ae8d44-92c8-45fc-a610-982f0242851e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5544l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 etcd-ha-142481-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kindnet-cjcsf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m22s
	  kube-system                 kube-apiserver-ha-142481-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-ha-142481-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-cdjzg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-scheduler-ha-142481-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-vip-ha-142481-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x8 over 4m22s)  kubelet          Node ha-142481-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x8 over 4m22s)  kubelet          Node ha-142481-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x7 over 4m22s)  kubelet          Node ha-142481-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	
	
	Name:               ha-142481-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_17_40_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:17:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    ha-142481-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98346cf85e5d4e1e831142d0f2e86f20
	  System UUID:                98346cf8-5e5d-4e1e-8311-42d0f2e86f20
	  Boot ID:                    0fd379eb-2eaf-4e1b-aeda-b9abfe41644d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qbvk6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-4xzhw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m14s)  kubelet          Node ha-142481-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m14s)  kubelet          Node ha-142481-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m14s)  kubelet          Node ha-142481-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-142481-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct10 18:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050451] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040403] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.885132] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.655679] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.952802] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct10 18:14] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.063573] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063579] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.169358] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.137879] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.284778] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.055847] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.359583] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.065935] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.163908] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.085716] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.930913] kauditd_printk_skb: 69 callbacks suppressed
	[Oct10 18:15] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58] <==
	{"level":"warn","ts":"2024-10-10T18:20:53.346320Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.346379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.354962Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.359281Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.367791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.378830Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.387101Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.391099Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.394952Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.403413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.411748Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.419044Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.425264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.429674Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.438375Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.439735Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.448728Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.453381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.462129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.467290Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.473764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.479068Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.486756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.522945Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:53.547717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:20:53 up 7 min,  0 users,  load average: 0.43, 0.37, 0.19
	Linux ha-142481 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3] <==
	I1010 18:20:15.395542       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:25.390200       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:25.390354       1 main.go:299] handling current node
	I1010 18:20:25.390392       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:25.390416       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	I1010 18:20:25.390644       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:25.390677       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:25.390737       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:25.390755       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:35.399378       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:35.399430       1 main.go:299] handling current node
	I1010 18:20:35.399452       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:35.399457       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	I1010 18:20:35.399642       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:35.399667       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:35.399718       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:35.399723       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:45.399629       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:45.399760       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:45.399950       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:45.399978       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:45.400080       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:45.400105       1 main.go:299] handling current node
	I1010 18:20:45.400138       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:45.400158       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c] <==
	I1010 18:14:21.601752       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1010 18:14:21.615538       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1010 18:14:22.685756       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1010 18:14:22.961093       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1010 18:15:13.597943       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.598021       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.162µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1010 18:15:13.599137       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.600311       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.601619       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.769951ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1010 18:17:03.850296       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50978: use of closed network connection
	E1010 18:17:04.060164       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50998: use of closed network connection
	E1010 18:17:04.265073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51022: use of closed network connection
	E1010 18:17:04.497148       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51026: use of closed network connection
	E1010 18:17:04.691753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51052: use of closed network connection
	E1010 18:17:04.874313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51072: use of closed network connection
	E1010 18:17:05.055509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51096: use of closed network connection
	E1010 18:17:05.241806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51110: use of closed network connection
	E1010 18:17:05.418962       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51128: use of closed network connection
	E1010 18:17:05.714305       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35886: use of closed network connection
	E1010 18:17:05.894226       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35894: use of closed network connection
	E1010 18:17:06.084951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35922: use of closed network connection
	E1010 18:17:06.281751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35936: use of closed network connection
	E1010 18:17:06.459430       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35954: use of closed network connection
	E1010 18:17:06.642941       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35966: use of closed network connection
	W1010 18:18:37.363890       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.175]
	
	
	==> kube-controller-manager [4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf] <==
	I1010 18:17:39.636355       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-142481-m04" podCIDRs=["10.244.3.0/24"]
	I1010 18:17:39.636414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.636469       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.668112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.689740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:40.177402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:40.233291       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:41.187681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:41.226193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:42.243172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:42.243646       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-142481-m04"
	I1010 18:17:42.333986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:49.941287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:59.249000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:59.249257       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-142481-m04"
	I1010 18:17:59.269371       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:00.212787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:09.988078       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:57.270927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-142481-m04"
	I1010 18:18:57.272138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:18:57.296852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:18:57.478314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.230176ms"
	I1010 18:18:57.478428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.474µs"
	I1010 18:19:00.278371       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:19:02.479119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	
	
	==> kube-proxy [9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 18:14:24.446239       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 18:14:24.508320       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.104"]
	E1010 18:14:24.508809       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:14:24.556831       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 18:14:24.556922       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 18:14:24.556961       1 server_linux.go:169] "Using iptables Proxier"
	I1010 18:14:24.559536       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:14:24.560518       1 server.go:483] "Version info" version="v1.31.1"
	I1010 18:14:24.560742       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:14:24.562971       1 config.go:199] "Starting service config controller"
	I1010 18:14:24.563611       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 18:14:24.563720       1 config.go:105] "Starting endpoint slice config controller"
	I1010 18:14:24.563744       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 18:14:24.566215       1 config.go:328] "Starting node config controller"
	I1010 18:14:24.566227       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 18:14:24.665476       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1010 18:14:24.665712       1 shared_informer.go:320] Caches are synced for service config
	I1010 18:14:24.667666       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026] <==
	W1010 18:14:16.494936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1010 18:14:16.495042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.517223       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1010 18:14:16.517488       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1010 18:14:16.544128       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 18:14:16.544233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.560806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.560856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.640427       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1010 18:14:16.640554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.701938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.702008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.773339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.773523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.873800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.874006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1010 18:14:18.221733       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1010 18:16:59.352658       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wf7qs\": pod busybox-7dff88458-wf7qs is already assigned to node \"ha-142481-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wf7qs" node="ha-142481-m02"
	E1010 18:16:59.352878       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8cfeb378-41dd-4850-bbc6-610453612cf5(default/busybox-7dff88458-wf7qs) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wf7qs"
	E1010 18:16:59.352933       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wf7qs\": pod busybox-7dff88458-wf7qs is already assigned to node \"ha-142481-m02\"" pod="default/busybox-7dff88458-wf7qs"
	I1010 18:16:59.352990       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wf7qs" node="ha-142481-m02"
	E1010 18:17:39.876287       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qbvk6\": pod kindnet-qbvk6 is already assigned to node \"ha-142481-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qbvk6" node="ha-142481-m04"
	E1010 18:17:39.876531       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 67b280c2-562d-45e0-a362-726dadaf5cf6(kube-system/kindnet-qbvk6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qbvk6"
	E1010 18:17:39.876554       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qbvk6\": pod kindnet-qbvk6 is already assigned to node \"ha-142481-m04\"" pod="kube-system/kindnet-qbvk6"
	I1010 18:17:39.876861       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qbvk6" node="ha-142481-m04"
	
	
	==> kubelet <==
	Oct 10 18:19:21 ha-142481 kubelet[1298]: E1010 18:19:21.653774    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584361653351989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:21 ha-142481 kubelet[1298]: E1010 18:19:21.654165    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584361653351989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:31 ha-142481 kubelet[1298]: E1010 18:19:31.655501    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584371655103881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:31 ha-142481 kubelet[1298]: E1010 18:19:31.656061    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584371655103881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:41 ha-142481 kubelet[1298]: E1010 18:19:41.657888    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584381657459506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:41 ha-142481 kubelet[1298]: E1010 18:19:41.657923    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584381657459506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:51 ha-142481 kubelet[1298]: E1010 18:19:51.662516    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584391660533273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:51 ha-142481 kubelet[1298]: E1010 18:19:51.662805    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584391660533273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:01 ha-142481 kubelet[1298]: E1010 18:20:01.665482    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584401664880599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:01 ha-142481 kubelet[1298]: E1010 18:20:01.665528    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584401664880599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:11 ha-142481 kubelet[1298]: E1010 18:20:11.668335    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584411667894103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:11 ha-142481 kubelet[1298]: E1010 18:20:11.668374    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584411667894103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.541634    1298 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 18:20:21 ha-142481 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.670317    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584421670063294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.670363    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584421670063294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:31 ha-142481 kubelet[1298]: E1010 18:20:31.672182    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584431671864331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:31 ha-142481 kubelet[1298]: E1010 18:20:31.672436    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584431671864331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:41 ha-142481 kubelet[1298]: E1010 18:20:41.682034    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584441680876363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:41 ha-142481 kubelet[1298]: E1010 18:20:41.682449    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584441680876363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:51 ha-142481 kubelet[1298]: E1010 18:20:51.683877    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584451683638908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:51 ha-142481 kubelet[1298]: E1010 18:20:51.683909    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584451683638908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-142481 -n ha-142481
helpers_test.go:261: (dbg) Run:  kubectl --context ha-142481 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.74s)

x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.16s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.77401153s)
ha_test.go:309: expected profile "ha-142481" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-142481\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-142481\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-142481\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.104\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.186\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.175\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.164\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"m
etallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":
262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-142481 -n ha-142481
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-142481 logs -n 25: (1.473636291s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481:/home/docker/cp-test_ha-142481-m03_ha-142481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481 sudo cat                                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m02:/home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m04 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp testdata/cp-test.txt                                                | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481:/home/docker/cp-test_ha-142481-m04_ha-142481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481 sudo cat                                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m02:/home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03:/home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m03 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-142481 node stop m02 -v=7                                                     | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-142481 node start m02 -v=7                                                    | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 18:13:38
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:13:38.106562   99368 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:13:38.106682   99368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:38.106690   99368 out.go:358] Setting ErrFile to fd 2...
	I1010 18:13:38.106694   99368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:38.106895   99368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:13:38.107477   99368 out.go:352] Setting JSON to false
	I1010 18:13:38.108309   99368 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6964,"bootTime":1728577054,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:13:38.108413   99368 start.go:139] virtualization: kvm guest
	I1010 18:13:38.110824   99368 out.go:177] * [ha-142481] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 18:13:38.112418   99368 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 18:13:38.112454   99368 notify.go:220] Checking for updates...
	I1010 18:13:38.114936   99368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:13:38.116370   99368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:13:38.117745   99368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.118944   99368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:13:38.120250   99368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:13:38.121551   99368 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 18:13:38.157644   99368 out.go:177] * Using the kvm2 driver based on user configuration
	I1010 18:13:38.158888   99368 start.go:297] selected driver: kvm2
	I1010 18:13:38.158919   99368 start.go:901] validating driver "kvm2" against <nil>
	I1010 18:13:38.158934   99368 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:13:38.159711   99368 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:13:38.159814   99368 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 18:13:38.174780   99368 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 18:13:38.174840   99368 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 18:13:38.175095   99368 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:13:38.175132   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:13:38.175195   99368 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1010 18:13:38.175219   99368 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 18:13:38.175271   99368 start.go:340] cluster config:
	{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1010 18:13:38.175372   99368 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:13:38.177295   99368 out.go:177] * Starting "ha-142481" primary control-plane node in "ha-142481" cluster
	I1010 18:13:38.178523   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:13:38.178564   99368 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:13:38.178578   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:13:38.178671   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:13:38.178686   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:13:38.179056   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:13:38.179080   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json: {Name:mk6ba06e5ddbd39667f8d6031429fc5b567ca233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:13:38.179240   99368 start.go:360] acquireMachinesLock for ha-142481: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:13:38.179277   99368 start.go:364] duration metric: took 20.536µs to acquireMachinesLock for "ha-142481"
	I1010 18:13:38.179299   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:13:38.179350   99368 start.go:125] createHost starting for "" (driver="kvm2")
	I1010 18:13:38.180956   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:13:38.181134   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:13:38.181190   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:13:38.195735   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37675
	I1010 18:13:38.196239   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:13:38.196810   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:13:38.196834   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:13:38.197229   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:13:38.197439   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:13:38.197656   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:13:38.197815   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:13:38.197850   99368 client.go:168] LocalClient.Create starting
	I1010 18:13:38.197896   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:13:38.197929   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:13:38.197946   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:13:38.197994   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:13:38.198011   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:13:38.198032   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:13:38.198051   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:13:38.198059   99368 main.go:141] libmachine: (ha-142481) Calling .PreCreateCheck
	I1010 18:13:38.198443   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:13:38.198814   99368 main.go:141] libmachine: Creating machine...
	I1010 18:13:38.198829   99368 main.go:141] libmachine: (ha-142481) Calling .Create
	I1010 18:13:38.199006   99368 main.go:141] libmachine: (ha-142481) Creating KVM machine...
	I1010 18:13:38.200423   99368 main.go:141] libmachine: (ha-142481) DBG | found existing default KVM network
	I1010 18:13:38.201134   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.200987   99391 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1010 18:13:38.201152   99368 main.go:141] libmachine: (ha-142481) DBG | created network xml: 
	I1010 18:13:38.201163   99368 main.go:141] libmachine: (ha-142481) DBG | <network>
	I1010 18:13:38.201168   99368 main.go:141] libmachine: (ha-142481) DBG |   <name>mk-ha-142481</name>
	I1010 18:13:38.201173   99368 main.go:141] libmachine: (ha-142481) DBG |   <dns enable='no'/>
	I1010 18:13:38.201179   99368 main.go:141] libmachine: (ha-142481) DBG |   
	I1010 18:13:38.201186   99368 main.go:141] libmachine: (ha-142481) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1010 18:13:38.201195   99368 main.go:141] libmachine: (ha-142481) DBG |     <dhcp>
	I1010 18:13:38.201204   99368 main.go:141] libmachine: (ha-142481) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1010 18:13:38.201210   99368 main.go:141] libmachine: (ha-142481) DBG |     </dhcp>
	I1010 18:13:38.201224   99368 main.go:141] libmachine: (ha-142481) DBG |   </ip>
	I1010 18:13:38.201233   99368 main.go:141] libmachine: (ha-142481) DBG |   
	I1010 18:13:38.201241   99368 main.go:141] libmachine: (ha-142481) DBG | </network>
	I1010 18:13:38.201253   99368 main.go:141] libmachine: (ha-142481) DBG | 
	I1010 18:13:38.206109   99368 main.go:141] libmachine: (ha-142481) DBG | trying to create private KVM network mk-ha-142481 192.168.39.0/24...
	I1010 18:13:38.273921   99368 main.go:141] libmachine: (ha-142481) DBG | private KVM network mk-ha-142481 192.168.39.0/24 created
	I1010 18:13:38.273973   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.273888   99391 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.273987   99368 main.go:141] libmachine: (ha-142481) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 ...
	I1010 18:13:38.274008   99368 main.go:141] libmachine: (ha-142481) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:13:38.274030   99368 main.go:141] libmachine: (ha-142481) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:13:38.538580   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.538442   99391 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa...
	I1010 18:13:38.734956   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.734800   99391 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/ha-142481.rawdisk...
	I1010 18:13:38.734986   99368 main.go:141] libmachine: (ha-142481) DBG | Writing magic tar header
	I1010 18:13:38.734996   99368 main.go:141] libmachine: (ha-142481) DBG | Writing SSH key tar header
	I1010 18:13:38.735006   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:38.734920   99391 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 ...
	I1010 18:13:38.735023   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481
	I1010 18:13:38.735054   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:13:38.735062   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481 (perms=drwx------)
	I1010 18:13:38.735074   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:13:38.735083   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:13:38.735098   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:13:38.735107   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:38.735121   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:13:38.735132   99368 main.go:141] libmachine: (ha-142481) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:13:38.735139   99368 main.go:141] libmachine: (ha-142481) Creating domain...
	I1010 18:13:38.735156   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:13:38.735166   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:13:38.735171   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:13:38.735177   99368 main.go:141] libmachine: (ha-142481) DBG | Checking permissions on dir: /home
	I1010 18:13:38.735183   99368 main.go:141] libmachine: (ha-142481) DBG | Skipping /home - not owner
	I1010 18:13:38.736388   99368 main.go:141] libmachine: (ha-142481) define libvirt domain using xml: 
	I1010 18:13:38.736417   99368 main.go:141] libmachine: (ha-142481) <domain type='kvm'>
	I1010 18:13:38.736427   99368 main.go:141] libmachine: (ha-142481)   <name>ha-142481</name>
	I1010 18:13:38.736439   99368 main.go:141] libmachine: (ha-142481)   <memory unit='MiB'>2200</memory>
	I1010 18:13:38.736471   99368 main.go:141] libmachine: (ha-142481)   <vcpu>2</vcpu>
	I1010 18:13:38.736493   99368 main.go:141] libmachine: (ha-142481)   <features>
	I1010 18:13:38.736527   99368 main.go:141] libmachine: (ha-142481)     <acpi/>
	I1010 18:13:38.736554   99368 main.go:141] libmachine: (ha-142481)     <apic/>
	I1010 18:13:38.736566   99368 main.go:141] libmachine: (ha-142481)     <pae/>
	I1010 18:13:38.736588   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736600   99368 main.go:141] libmachine: (ha-142481)   </features>
	I1010 18:13:38.736610   99368 main.go:141] libmachine: (ha-142481)   <cpu mode='host-passthrough'>
	I1010 18:13:38.736620   99368 main.go:141] libmachine: (ha-142481)   
	I1010 18:13:38.736633   99368 main.go:141] libmachine: (ha-142481)   </cpu>
	I1010 18:13:38.736643   99368 main.go:141] libmachine: (ha-142481)   <os>
	I1010 18:13:38.736649   99368 main.go:141] libmachine: (ha-142481)     <type>hvm</type>
	I1010 18:13:38.736661   99368 main.go:141] libmachine: (ha-142481)     <boot dev='cdrom'/>
	I1010 18:13:38.736672   99368 main.go:141] libmachine: (ha-142481)     <boot dev='hd'/>
	I1010 18:13:38.736684   99368 main.go:141] libmachine: (ha-142481)     <bootmenu enable='no'/>
	I1010 18:13:38.736693   99368 main.go:141] libmachine: (ha-142481)   </os>
	I1010 18:13:38.736700   99368 main.go:141] libmachine: (ha-142481)   <devices>
	I1010 18:13:38.736710   99368 main.go:141] libmachine: (ha-142481)     <disk type='file' device='cdrom'>
	I1010 18:13:38.736729   99368 main.go:141] libmachine: (ha-142481)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/boot2docker.iso'/>
	I1010 18:13:38.736737   99368 main.go:141] libmachine: (ha-142481)       <target dev='hdc' bus='scsi'/>
	I1010 18:13:38.736742   99368 main.go:141] libmachine: (ha-142481)       <readonly/>
	I1010 18:13:38.736748   99368 main.go:141] libmachine: (ha-142481)     </disk>
	I1010 18:13:38.736754   99368 main.go:141] libmachine: (ha-142481)     <disk type='file' device='disk'>
	I1010 18:13:38.736761   99368 main.go:141] libmachine: (ha-142481)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:13:38.736768   99368 main.go:141] libmachine: (ha-142481)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/ha-142481.rawdisk'/>
	I1010 18:13:38.736773   99368 main.go:141] libmachine: (ha-142481)       <target dev='hda' bus='virtio'/>
	I1010 18:13:38.736780   99368 main.go:141] libmachine: (ha-142481)     </disk>
	I1010 18:13:38.736789   99368 main.go:141] libmachine: (ha-142481)     <interface type='network'>
	I1010 18:13:38.736795   99368 main.go:141] libmachine: (ha-142481)       <source network='mk-ha-142481'/>
	I1010 18:13:38.736800   99368 main.go:141] libmachine: (ha-142481)       <model type='virtio'/>
	I1010 18:13:38.736804   99368 main.go:141] libmachine: (ha-142481)     </interface>
	I1010 18:13:38.736811   99368 main.go:141] libmachine: (ha-142481)     <interface type='network'>
	I1010 18:13:38.736816   99368 main.go:141] libmachine: (ha-142481)       <source network='default'/>
	I1010 18:13:38.736822   99368 main.go:141] libmachine: (ha-142481)       <model type='virtio'/>
	I1010 18:13:38.736831   99368 main.go:141] libmachine: (ha-142481)     </interface>
	I1010 18:13:38.736837   99368 main.go:141] libmachine: (ha-142481)     <serial type='pty'>
	I1010 18:13:38.736842   99368 main.go:141] libmachine: (ha-142481)       <target port='0'/>
	I1010 18:13:38.736868   99368 main.go:141] libmachine: (ha-142481)     </serial>
	I1010 18:13:38.736882   99368 main.go:141] libmachine: (ha-142481)     <console type='pty'>
	I1010 18:13:38.736896   99368 main.go:141] libmachine: (ha-142481)       <target type='serial' port='0'/>
	I1010 18:13:38.736911   99368 main.go:141] libmachine: (ha-142481)     </console>
	I1010 18:13:38.736921   99368 main.go:141] libmachine: (ha-142481)     <rng model='virtio'>
	I1010 18:13:38.736929   99368 main.go:141] libmachine: (ha-142481)       <backend model='random'>/dev/random</backend>
	I1010 18:13:38.736935   99368 main.go:141] libmachine: (ha-142481)     </rng>
	I1010 18:13:38.736942   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736951   99368 main.go:141] libmachine: (ha-142481)     
	I1010 18:13:38.736962   99368 main.go:141] libmachine: (ha-142481)   </devices>
	I1010 18:13:38.736973   99368 main.go:141] libmachine: (ha-142481) </domain>
	I1010 18:13:38.737007   99368 main.go:141] libmachine: (ha-142481) 
	I1010 18:13:38.741472   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:b1:0c:5d in network default
	I1010 18:13:38.742188   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:38.742202   99368 main.go:141] libmachine: (ha-142481) Ensuring networks are active...
	I1010 18:13:38.743102   99368 main.go:141] libmachine: (ha-142481) Ensuring network default is active
	I1010 18:13:38.743484   99368 main.go:141] libmachine: (ha-142481) Ensuring network mk-ha-142481 is active
	I1010 18:13:38.743981   99368 main.go:141] libmachine: (ha-142481) Getting domain xml...
	I1010 18:13:38.744831   99368 main.go:141] libmachine: (ha-142481) Creating domain...
	I1010 18:13:39.943643   99368 main.go:141] libmachine: (ha-142481) Waiting to get IP...
	I1010 18:13:39.944415   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:39.944819   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:39.944886   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:39.944805   99391 retry.go:31] will retry after 263.450232ms: waiting for machine to come up
	I1010 18:13:40.210494   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.210938   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.210979   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.210904   99391 retry.go:31] will retry after 318.83444ms: waiting for machine to come up
	I1010 18:13:40.531556   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.531982   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.532010   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.531946   99391 retry.go:31] will retry after 379.250744ms: waiting for machine to come up
	I1010 18:13:40.912440   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:40.912909   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:40.912942   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:40.912844   99391 retry.go:31] will retry after 505.831382ms: waiting for machine to come up
	I1010 18:13:41.420670   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:41.421119   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:41.421141   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:41.421071   99391 retry.go:31] will retry after 555.074801ms: waiting for machine to come up
	I1010 18:13:41.977849   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:41.978257   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:41.978281   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:41.978194   99391 retry.go:31] will retry after 636.152434ms: waiting for machine to come up
	I1010 18:13:42.615909   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:42.616285   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:42.616320   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:42.616236   99391 retry.go:31] will retry after 907.451913ms: waiting for machine to come up
	I1010 18:13:43.524700   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:43.525164   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:43.525241   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:43.525119   99391 retry.go:31] will retry after 916.746032ms: waiting for machine to come up
	I1010 18:13:44.443019   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:44.443439   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:44.443463   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:44.443379   99391 retry.go:31] will retry after 1.722399675s: waiting for machine to come up
	I1010 18:13:46.168252   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:46.168660   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:46.168691   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:46.168625   99391 retry.go:31] will retry after 2.191060126s: waiting for machine to come up
	I1010 18:13:48.361115   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:48.361666   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:48.361699   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:48.361609   99391 retry.go:31] will retry after 2.390239739s: waiting for machine to come up
	I1010 18:13:50.755200   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:50.755610   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:50.755636   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:50.755576   99391 retry.go:31] will retry after 2.188596051s: waiting for machine to come up
	I1010 18:13:52.946995   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:52.947360   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:52.947382   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:52.947318   99391 retry.go:31] will retry after 3.863064875s: waiting for machine to come up
	I1010 18:13:56.814839   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:13:56.815487   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find current IP address of domain ha-142481 in network mk-ha-142481
	I1010 18:13:56.815508   99368 main.go:141] libmachine: (ha-142481) DBG | I1010 18:13:56.815409   99391 retry.go:31] will retry after 3.762373701s: waiting for machine to come up
	I1010 18:14:00.580406   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.580915   99368 main.go:141] libmachine: (ha-142481) Found IP for machine: 192.168.39.104
	I1010 18:14:00.580940   99368 main.go:141] libmachine: (ha-142481) Reserving static IP address...
	I1010 18:14:00.580952   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has current primary IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.581384   99368 main.go:141] libmachine: (ha-142481) DBG | unable to find host DHCP lease matching {name: "ha-142481", mac: "52:54:00:3e:fa:00", ip: "192.168.39.104"} in network mk-ha-142481
	I1010 18:14:00.656496   99368 main.go:141] libmachine: (ha-142481) DBG | Getting to WaitForSSH function...
	I1010 18:14:00.656530   99368 main.go:141] libmachine: (ha-142481) Reserved static IP address: 192.168.39.104
	I1010 18:14:00.656576   99368 main.go:141] libmachine: (ha-142481) Waiting for SSH to be available...
	I1010 18:14:00.659584   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.659994   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.660032   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.660120   99368 main.go:141] libmachine: (ha-142481) DBG | Using SSH client type: external
	I1010 18:14:00.660175   99368 main.go:141] libmachine: (ha-142481) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa (-rw-------)
	I1010 18:14:00.660252   99368 main.go:141] libmachine: (ha-142481) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:14:00.660280   99368 main.go:141] libmachine: (ha-142481) DBG | About to run SSH command:
	I1010 18:14:00.660297   99368 main.go:141] libmachine: (ha-142481) DBG | exit 0
	I1010 18:14:00.789008   99368 main.go:141] libmachine: (ha-142481) DBG | SSH cmd err, output: <nil>: 
	I1010 18:14:00.789292   99368 main.go:141] libmachine: (ha-142481) KVM machine creation complete!
	I1010 18:14:00.789591   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:14:00.790247   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:00.790563   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:00.790779   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:14:00.790797   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:00.791977   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:14:00.791993   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:14:00.792000   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:14:00.792007   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:00.795049   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.795517   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.795546   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.795737   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:00.795931   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.796109   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.796201   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:00.796384   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:00.796677   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:00.796694   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:14:00.904506   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:00.904529   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:14:00.904538   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:00.907535   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.907882   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:00.907924   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:00.908104   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:00.908324   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.908499   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:00.908658   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:00.908892   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:00.909076   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:00.909086   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:14:01.018108   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:14:01.018217   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:14:01.018228   99368 main.go:141] libmachine: Provisioning with buildroot...
	I1010 18:14:01.018236   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.018570   99368 buildroot.go:166] provisioning hostname "ha-142481"
	I1010 18:14:01.018602   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.018780   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.021625   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.022001   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.022049   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.022142   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.022330   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.022485   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.022628   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.022792   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:01.023020   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:01.023040   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481 && echo "ha-142481" | sudo tee /etc/hostname
	I1010 18:14:01.148746   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481
	
	I1010 18:14:01.148780   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.151700   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.152069   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.152101   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.152379   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.152566   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.152733   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.153007   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.153254   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:01.153456   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:01.153473   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:14:01.270656   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:01.270702   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:14:01.270768   99368 buildroot.go:174] setting up certificates
	I1010 18:14:01.270784   99368 provision.go:84] configureAuth start
	I1010 18:14:01.270804   99368 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:14:01.271123   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:01.274054   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.274377   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.274414   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.274599   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.277056   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.277372   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.277402   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.277532   99368 provision.go:143] copyHostCerts
	I1010 18:14:01.277566   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:01.277608   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:14:01.277620   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:01.277701   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:14:01.277845   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:01.277882   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:14:01.277893   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:01.277935   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:14:01.278014   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:01.278037   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:14:01.278043   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:01.278078   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:14:01.278160   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481 san=[127.0.0.1 192.168.39.104 ha-142481 localhost minikube]
	I1010 18:14:01.863097   99368 provision.go:177] copyRemoteCerts
	I1010 18:14:01.863162   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:14:01.863187   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:01.866290   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.866626   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:01.866657   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:01.866843   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:01.867075   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:01.867295   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:01.867474   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:01.951802   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:14:01.951888   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:14:01.976504   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:14:01.976590   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1010 18:14:02.000608   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:14:02.000694   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 18:14:02.025514   99368 provision.go:87] duration metric: took 754.678106ms to configureAuth
	I1010 18:14:02.025558   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:14:02.025780   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:02.025872   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.028822   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.029419   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.029448   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.029637   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.029859   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.030076   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.030249   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.030408   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:02.030613   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:02.030638   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:14:02.255598   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:14:02.255635   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:14:02.255663   99368 main.go:141] libmachine: (ha-142481) Calling .GetURL
	I1010 18:14:02.256998   99368 main.go:141] libmachine: (ha-142481) DBG | Using libvirt version 6000000
	I1010 18:14:02.259693   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.260061   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.260105   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.260245   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:14:02.260269   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:14:02.260277   99368 client.go:171] duration metric: took 24.062416136s to LocalClient.Create
	I1010 18:14:02.260305   99368 start.go:167] duration metric: took 24.062491775s to libmachine.API.Create "ha-142481"
	I1010 18:14:02.260317   99368 start.go:293] postStartSetup for "ha-142481" (driver="kvm2")
	I1010 18:14:02.260330   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:14:02.260355   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.260598   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:14:02.260623   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.262655   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.262966   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.262995   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.263106   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.263281   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.263418   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.263549   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.347386   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:14:02.352007   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:14:02.352037   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:14:02.352118   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:14:02.352241   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:14:02.352255   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:14:02.352383   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:14:02.361986   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:02.387757   99368 start.go:296] duration metric: took 127.42447ms for postStartSetup
	I1010 18:14:02.387817   99368 main.go:141] libmachine: (ha-142481) Calling .GetConfigRaw
	I1010 18:14:02.388481   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:02.391530   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.391900   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.391927   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.392187   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:02.392385   99368 start.go:128] duration metric: took 24.213024958s to createHost
	I1010 18:14:02.392410   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.394865   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.395239   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.395269   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.395418   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.395616   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.395799   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.395913   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.396045   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:02.396233   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:14:02.396253   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:14:02.506374   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584042.463674877
	
	I1010 18:14:02.506405   99368 fix.go:216] guest clock: 1728584042.463674877
	I1010 18:14:02.506415   99368 fix.go:229] Guest: 2024-10-10 18:14:02.463674877 +0000 UTC Remote: 2024-10-10 18:14:02.392397471 +0000 UTC m=+24.322985546 (delta=71.277406ms)
	I1010 18:14:02.506501   99368 fix.go:200] guest clock delta is within tolerance: 71.277406ms
	I1010 18:14:02.506513   99368 start.go:83] releasing machines lock for "ha-142481", held for 24.327223548s
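The fix.go lines above parse the guest's "date +%s.%N" output and accept the machine once the guest/host clock delta (71.277406ms here) falls inside a tolerance. A minimal Go sketch of that comparison, assuming a hypothetical one-second tolerance (the real threshold lives in minikube's fix.go and is not shown in this log):

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance reports whether guest and host clocks differ by less
// than tol; guest would come from parsing "date +%s.%N" over SSH.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	return math.Abs(float64(guest.Sub(host))) < float64(tol)
}

func main() {
	host := time.Now()
	guest := host.Add(71 * time.Millisecond) // roughly the delta logged above
	fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
}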
	I1010 18:14:02.506550   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.506889   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:02.509401   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.509764   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.509802   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.509942   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510549   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510772   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:02.510843   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:14:02.510929   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.511003   99368 ssh_runner.go:195] Run: cat /version.json
	I1010 18:14:02.511038   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:02.513796   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.513896   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514234   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.514254   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514280   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:02.514293   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:02.514533   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.514631   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:02.514713   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.514804   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:02.514890   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.514938   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:02.515026   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.515073   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:02.615715   99368 ssh_runner.go:195] Run: systemctl --version
	I1010 18:14:02.621955   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:14:02.785775   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:14:02.792271   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:14:02.792352   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:14:02.808426   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:14:02.808464   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:14:02.808542   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:14:02.825314   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:14:02.842065   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:14:02.842135   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:14:02.858984   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:14:02.876330   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:14:02.990523   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:14:03.132316   99368 docker.go:233] disabling docker service ...
	I1010 18:14:03.132386   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:14:03.147477   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:14:03.161268   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:14:03.304325   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:14:03.429397   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:14:03.443898   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:14:03.463181   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:14:03.463273   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.474215   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:14:03.474286   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.485513   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.496394   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.507084   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:14:03.517675   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.527867   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:03.545825   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
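The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to cgroupfs, conmon_cgroup to "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A minimal Go sketch of the first of those edits, assuming the file exists locally and is writable (the real edits run over SSH via sh -c, not locally):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Same edit as the first sed in the log: point CRI-O's pause_image at
	// registry.k8s.io/pause:3.10 inside 02-crio.conf.
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println(err)
		return
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Println(err)
	}
}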
	I1010 18:14:03.556723   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:14:03.566428   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:14:03.566513   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:14:03.579726   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
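The sysctl probe above fails with status 255 because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, so the next two commands load the module and enable IPv4 forwarding. A rough Go sketch of that fallback, assuming passwordless sudo (a sketch of the logged behaviour, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback visible in the log: when the
// bridge-nf-call-iptables sysctl cannot be read (the proc entry only exists
// once br_netfilter is loaded), load the module, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}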
	I1010 18:14:03.589897   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:03.711306   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:14:03.812353   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:14:03.812440   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:14:03.817265   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:14:03.817331   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:14:03.821238   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:14:03.865031   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:14:03.865131   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:03.893405   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:03.923688   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:14:03.925089   99368 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:14:03.927862   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:03.928210   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:03.928239   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:03.928482   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:14:03.932808   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:03.947607   99368 kubeadm.go:883] updating cluster {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:14:03.947723   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:14:03.947771   99368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:14:03.980321   99368 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 18:14:03.980402   99368 ssh_runner.go:195] Run: which lz4
	I1010 18:14:03.984490   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1010 18:14:03.984586   99368 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 18:14:03.988814   99368 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 18:14:03.988866   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 18:14:05.363098   99368 crio.go:462] duration metric: took 1.37853137s to copy over tarball
	I1010 18:14:05.363172   99368 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 18:14:07.378827   99368 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.01562073s)
	I1010 18:14:07.378863   99368 crio.go:469] duration metric: took 2.015730634s to extract the tarball
	I1010 18:14:07.378873   99368 ssh_runner.go:146] rm: /preloaded.tar.lz4
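The preload step above copies a ~388 MB lz4-compressed image tarball into the guest and unpacks it under /var with extended attributes preserved, so CRI-O finds the Kubernetes v1.31.1 images already in its image store. A Go sketch wrapping that same extraction command, assuming the tarball is already present locally at /preloaded.tar.lz4 (the real run happens over SSH inside the VM):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// lz4-compressed tarball, extended attributes kept so file capabilities survive.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}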
	I1010 18:14:07.415494   99368 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:14:07.461637   99368 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:14:07.461668   99368 cache_images.go:84] Images are preloaded, skipping loading
	I1010 18:14:07.461678   99368 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I1010 18:14:07.461810   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:14:07.461895   99368 ssh_runner.go:195] Run: crio config
	I1010 18:14:07.511179   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:14:07.511203   99368 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 18:14:07.511219   99368 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 18:14:07.511240   99368 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-142481 NodeName:ha-142481 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:14:07.511378   99368 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-142481"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:14:07.511402   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:14:07.511447   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:14:07.530825   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:14:07.530966   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
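The kube-vip static-pod manifest above pins the API-server VIP 192.168.39.254 on eth0 and, because the modprobe of the IPVS modules earlier in this step succeeded, also enables control-plane load-balancing (lb_enable with lb_port 8443). A small Go sketch of that same module probe, assuming passwordless sudo:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The "auto-enabling control-plane load-balancing" decision logged above
	// follows a probe for the IPVS modules; this repeats that probe.
	err := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack").Run()
	fmt.Println("kube-vip lb_enable:", err == nil)
}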
	I1010 18:14:07.531061   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:07.541336   99368 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:14:07.541418   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1010 18:14:07.551149   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1010 18:14:07.567775   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:14:07.585048   99368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1010 18:14:07.601614   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1010 18:14:07.618435   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:14:07.622366   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:07.634534   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:07.769061   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:14:07.786728   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.104
	I1010 18:14:07.786757   99368 certs.go:194] generating shared ca certs ...
	I1010 18:14:07.786780   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.786963   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:14:07.787019   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:14:07.787049   99368 certs.go:256] generating profile certs ...
	I1010 18:14:07.787126   99368 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:14:07.787145   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt with IP's: []
	I1010 18:14:07.903290   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt ...
	I1010 18:14:07.903319   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt: {Name:mkc3e45adeab2c56df47bde3919e2c30e370ae85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.903506   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key ...
	I1010 18:14:07.903521   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key: {Name:mka461c8525916f7bc85840820bc278320ec6313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:07.903626   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560
	I1010 18:14:07.903643   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.254]
	I1010 18:14:08.280801   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 ...
	I1010 18:14:08.280860   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560: {Name:mk5acd7350e86bebedada3fd330840a975c10cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.281063   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560 ...
	I1010 18:14:08.281078   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560: {Name:mk1053269a10fe97cf940622a274d032edb2023c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.281164   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.36896560 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:14:08.281248   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.36896560 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
	I1010 18:14:08.281307   99368 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:14:08.281325   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt with IP's: []
	I1010 18:14:08.428528   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt ...
	I1010 18:14:08.428562   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt: {Name:mk868dec1ca79ab4285d30dbc6ee93e0f0415a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:08.428730   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key ...
	I1010 18:14:08.428741   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key: {Name:mk5632176fd6e0bd1fedbd590f44cb77fc86fc75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
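The profile certificates generated above include an apiserver cert whose SANs cover the service VIP 10.96.0.1, localhost, 10.0.0.1, the node IP 192.168.39.104 and the kube-vip address 192.168.39.254. A self-contained Go sketch that builds a certificate with those same IP SANs; it is self-signed for brevity, whereas minikube signs with its minikubeCA key, and the subject names and validity period here are illustrative assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-142481"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.104"),
			net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed here; minikube uses its minikubeCA as the parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}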
	I1010 18:14:08.428812   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:14:08.428829   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:14:08.428839   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:14:08.428867   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:14:08.428886   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:14:08.428905   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:14:08.428919   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:14:08.428930   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:14:08.428986   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:14:08.429023   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:14:08.429032   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:14:08.429057   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:14:08.429082   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:14:08.429103   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:14:08.429139   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:08.429166   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.429180   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.429192   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.429725   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:14:08.459934   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:14:08.486537   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:14:08.511793   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:14:08.536743   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:14:08.569819   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:14:08.605499   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:14:08.633615   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:14:08.657501   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:14:08.684906   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:14:08.712812   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:14:08.741219   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:14:08.760444   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:14:08.766741   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:14:08.778475   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.783145   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.783213   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:14:08.789500   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:14:08.800279   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:14:08.811452   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.816338   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.816413   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:08.822105   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:14:08.833024   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:14:08.844522   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.849855   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.849915   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:14:08.856326   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
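Each of the three CA files above is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, for example), which is how OpenSSL-style verifiers locate trust anchors. A local Go sketch of that hash-and-symlink step, assuming an openssl binary on PATH and write access to the target directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA symlinks a CA certificate into certsDir under the
// <subject-hash>.0 name that OpenSSL-style verifiers look up.
func installCA(caPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", caPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace a stale link if present
	return os.Symlink(caPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}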
	I1010 18:14:08.868339   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:14:08.873080   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:14:08.873139   99368 kubeadm.go:392] StartCluster: {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:14:08.873227   99368 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:14:08.873270   99368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:14:08.916635   99368 cri.go:89] found id: ""
	I1010 18:14:08.916701   99368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:14:08.927424   99368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:14:08.937639   99368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:14:08.950754   99368 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:14:08.950779   99368 kubeadm.go:157] found existing configuration files:
	
	I1010 18:14:08.950834   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:14:08.962204   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:14:08.962290   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:14:08.975261   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:14:08.986716   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:14:08.986809   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:14:08.998689   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:14:09.010244   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:14:09.010336   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:14:09.022153   99368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:14:09.033360   99368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:14:09.033436   99368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 18:14:09.045356   99368 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 18:14:09.160966   99368 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 18:14:09.161052   99368 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 18:14:09.286355   99368 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 18:14:09.286552   99368 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 18:14:09.286700   99368 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 18:14:09.304139   99368 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 18:14:09.367960   99368 out.go:235]   - Generating certificates and keys ...
	I1010 18:14:09.368080   99368 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 18:14:09.368161   99368 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 18:14:09.384046   99368 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 18:14:09.463103   99368 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1010 18:14:09.567857   99368 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1010 18:14:09.723111   99368 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1010 18:14:09.854233   99368 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1010 18:14:09.854378   99368 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-142481 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	I1010 18:14:09.939722   99368 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1010 18:14:09.939862   99368 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-142481 localhost] and IPs [192.168.39.104 127.0.0.1 ::1]
	I1010 18:14:10.144343   99368 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 18:14:10.236373   99368 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 18:14:10.313629   99368 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1010 18:14:10.313727   99368 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 18:14:10.420431   99368 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 18:14:10.571019   99368 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 18:14:10.736436   99368 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 18:14:10.835479   99368 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 18:14:10.964962   99368 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 18:14:10.965625   99368 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 18:14:10.970210   99368 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 18:14:10.974272   99368 out.go:235]   - Booting up control plane ...
	I1010 18:14:10.974411   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 18:14:10.974532   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 18:14:10.974647   99368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 18:14:10.995458   99368 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 18:14:11.002605   99368 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 18:14:11.002687   99368 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 18:14:11.149847   99368 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:14:11.150007   99368 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:14:11.651121   99368 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.084729ms
	I1010 18:14:11.651236   99368 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 18:14:20.808127   99368 kubeadm.go:310] [api-check] The API server is healthy after 9.156536113s
	I1010 18:14:20.824946   99368 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:14:20.839773   99368 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:14:20.870820   99368 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:14:20.871016   99368 kubeadm.go:310] [mark-control-plane] Marking the node ha-142481 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:14:20.887157   99368 kubeadm.go:310] [bootstrap-token] Using token: 644oik.7go4jyqro7if5l4w
	I1010 18:14:20.888737   99368 out.go:235]   - Configuring RBAC rules ...
	I1010 18:14:20.888842   99368 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:14:20.898440   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:14:20.910480   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:14:20.915628   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 18:14:20.920682   99368 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:14:20.931471   99368 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:14:21.219016   99368 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:14:21.647641   99368 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 18:14:22.223206   99368 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 18:14:22.224137   99368 kubeadm.go:310] 
	I1010 18:14:22.224257   99368 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 18:14:22.224281   99368 kubeadm.go:310] 
	I1010 18:14:22.224367   99368 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 18:14:22.224376   99368 kubeadm.go:310] 
	I1010 18:14:22.224411   99368 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 18:14:22.224481   99368 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:14:22.224552   99368 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:14:22.224561   99368 kubeadm.go:310] 
	I1010 18:14:22.224636   99368 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 18:14:22.224649   99368 kubeadm.go:310] 
	I1010 18:14:22.224716   99368 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:14:22.224728   99368 kubeadm.go:310] 
	I1010 18:14:22.224806   99368 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 18:14:22.224925   99368 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:14:22.225015   99368 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:14:22.225025   99368 kubeadm.go:310] 
	I1010 18:14:22.225149   99368 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:14:22.225266   99368 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 18:14:22.225276   99368 kubeadm.go:310] 
	I1010 18:14:22.225390   99368 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 644oik.7go4jyqro7if5l4w \
	I1010 18:14:22.225541   99368 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 18:14:22.225591   99368 kubeadm.go:310] 	--control-plane 
	I1010 18:14:22.225619   99368 kubeadm.go:310] 
	I1010 18:14:22.225743   99368 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:14:22.225753   99368 kubeadm.go:310] 
	I1010 18:14:22.225845   99368 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 644oik.7go4jyqro7if5l4w \
	I1010 18:14:22.225968   99368 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 18:14:22.226430   99368 kubeadm.go:310] W1010 18:14:09.112606     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 18:14:22.226836   99368 kubeadm.go:310] W1010 18:14:09.113373     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 18:14:22.226944   99368 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 18:14:22.226978   99368 cni.go:84] Creating CNI manager for ""
	I1010 18:14:22.226989   99368 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1010 18:14:22.229089   99368 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1010 18:14:22.230625   99368 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:14:22.236334   99368 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1010 18:14:22.236358   99368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:14:22.263826   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 18:14:22.691291   99368 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:14:22.691383   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:22.691399   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481 minikube.k8s.io/updated_at=2024_10_10T18_14_22_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=true
	I1010 18:14:22.748532   99368 ops.go:34] apiserver oom_adj: -16
	I1010 18:14:22.970463   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:23.471032   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:14:23.553414   99368 kubeadm.go:1113] duration metric: took 862.100636ms to wait for elevateKubeSystemPrivileges
	I1010 18:14:23.553464   99368 kubeadm.go:394] duration metric: took 14.680326546s to StartCluster
	I1010 18:14:23.553490   99368 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:23.553611   99368 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:14:23.554487   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:23.554725   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:14:23.554735   99368 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:14:23.554719   99368 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:14:23.554809   99368 addons.go:69] Setting storage-provisioner=true in profile "ha-142481"
	I1010 18:14:23.554818   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:14:23.554825   99368 addons.go:234] Setting addon storage-provisioner=true in "ha-142481"
	I1010 18:14:23.554829   99368 addons.go:69] Setting default-storageclass=true in profile "ha-142481"
	I1010 18:14:23.554845   99368 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-142481"
	I1010 18:14:23.554853   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:23.554928   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:23.555209   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.555239   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.555300   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.555338   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.570324   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36105
	I1010 18:14:23.570445   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38545
	I1010 18:14:23.570857   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.570886   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.571436   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.571459   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.571566   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.571589   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.571790   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.571894   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.571996   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.572434   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.572484   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.574225   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:14:23.574554   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1010 18:14:23.575091   99368 cert_rotation.go:140] Starting client certificate rotation controller
	I1010 18:14:23.575347   99368 addons.go:234] Setting addon default-storageclass=true in "ha-142481"
	I1010 18:14:23.575391   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:23.575743   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.575783   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.587483   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34291
	I1010 18:14:23.587940   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.588477   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.588502   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.588933   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.589102   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.590856   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:23.590904   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42799
	I1010 18:14:23.591399   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.591917   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.591946   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.592234   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.592690   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:23.592731   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:23.593082   99368 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:14:23.594593   99368 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:14:23.594613   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:14:23.594629   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:23.597561   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.598029   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:23.598057   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.598292   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:23.598455   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:23.598621   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:23.598811   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:23.608949   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46053
	I1010 18:14:23.609372   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:23.609889   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:23.609916   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:23.610243   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:23.610467   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:23.612216   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:23.612447   99368 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:14:23.612464   99368 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:14:23.612481   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:23.615402   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.615852   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:23.615886   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:23.616075   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:23.616255   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:23.616404   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:23.616566   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:23.680546   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:14:23.774021   99368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:14:23.820915   99368 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:14:24.197953   99368 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
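Note: the pipeline logged at 18:14:23.680 rewrites the CoreDNS ConfigMap in place: its sed expressions insert a hosts block ahead of the "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", which is what produces the "host record injected" message above. Reconstructed from those sed expressions (not dumped from the cluster), the affected part of the resulting Corefile should look roughly like:

        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

With this in place, pods can reach the host (the libvirt gateway 192.168.39.1) by the stable name host.minikube.internal, and unknown names still fall through to the normal upstream resolvers.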
	I1010 18:14:24.533925   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.533960   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.533990   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534001   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534267   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534297   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534313   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534319   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534320   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534323   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534342   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534328   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534394   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.534402   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.534551   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534571   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534647   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.534673   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.534690   99368 main.go:141] libmachine: (ha-142481) DBG | Closing plugin on server side
	I1010 18:14:24.534743   99368 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1010 18:14:24.534893   99368 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1010 18:14:24.535016   99368 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1010 18:14:24.535028   99368 round_trippers.go:469] Request Headers:
	I1010 18:14:24.535038   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:14:24.535046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:14:24.550066   99368 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1010 18:14:24.550802   99368 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1010 18:14:24.550817   99368 round_trippers.go:469] Request Headers:
	I1010 18:14:24.550825   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:14:24.550830   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:14:24.550834   99368 round_trippers.go:473]     Content-Type: application/json
	I1010 18:14:24.554277   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:14:24.554448   99368 main.go:141] libmachine: Making call to close driver server
	I1010 18:14:24.554465   99368 main.go:141] libmachine: (ha-142481) Calling .Close
	I1010 18:14:24.554772   99368 main.go:141] libmachine: Successfully made call to close driver server
	I1010 18:14:24.554791   99368 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 18:14:24.556620   99368 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1010 18:14:24.558034   99368 addons.go:510] duration metric: took 1.003294102s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1010 18:14:24.558071   99368 start.go:246] waiting for cluster config update ...
	I1010 18:14:24.558083   99368 start.go:255] writing updated cluster config ...
	I1010 18:14:24.559825   99368 out.go:201] 
	I1010 18:14:24.561439   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:24.561503   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:24.563101   99368 out.go:177] * Starting "ha-142481-m02" control-plane node in "ha-142481" cluster
	I1010 18:14:24.564327   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:14:24.564349   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:14:24.564452   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:14:24.564466   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:14:24.564540   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:24.564701   99368 start.go:360] acquireMachinesLock for ha-142481-m02: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:14:24.564749   99368 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "ha-142481-m02"
	I1010 18:14:24.564772   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:14:24.564841   99368 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1010 18:14:24.566583   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:14:24.566679   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:24.566707   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:24.581685   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I1010 18:14:24.582176   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:24.582682   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:24.582704   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:24.583014   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:24.583206   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:24.583343   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:24.583500   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:14:24.583528   99368 client.go:168] LocalClient.Create starting
	I1010 18:14:24.583563   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:14:24.583608   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:14:24.583628   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:14:24.583689   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:14:24.583714   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:14:24.583730   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:14:24.583754   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:14:24.583765   99368 main.go:141] libmachine: (ha-142481-m02) Calling .PreCreateCheck
	I1010 18:14:24.584021   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:24.584567   99368 main.go:141] libmachine: Creating machine...
	I1010 18:14:24.584588   99368 main.go:141] libmachine: (ha-142481-m02) Calling .Create
	I1010 18:14:24.584740   99368 main.go:141] libmachine: (ha-142481-m02) Creating KVM machine...
	I1010 18:14:24.585948   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found existing default KVM network
	I1010 18:14:24.586049   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found existing private KVM network mk-ha-142481
	I1010 18:14:24.586156   99368 main.go:141] libmachine: (ha-142481-m02) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 ...
	I1010 18:14:24.586179   99368 main.go:141] libmachine: (ha-142481-m02) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:14:24.586274   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:24.586151   99736 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:14:24.586354   99368 main.go:141] libmachine: (ha-142481-m02) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:14:24.870233   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:24.870047   99736 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa...
	I1010 18:14:25.124750   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:25.124608   99736 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/ha-142481-m02.rawdisk...
	I1010 18:14:25.124783   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Writing magic tar header
	I1010 18:14:25.124795   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Writing SSH key tar header
	I1010 18:14:25.124806   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:25.124735   99736 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 ...
	I1010 18:14:25.124821   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02
	I1010 18:14:25.124919   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:14:25.124946   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02 (perms=drwx------)
	I1010 18:14:25.124954   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:14:25.124968   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:14:25.124973   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:14:25.124980   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:14:25.124988   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:14:25.124994   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:14:25.124999   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:14:25.125037   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:14:25.125058   99368 main.go:141] libmachine: (ha-142481-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:14:25.125067   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Checking permissions on dir: /home
	I1010 18:14:25.125079   99368 main.go:141] libmachine: (ha-142481-m02) Creating domain...
	I1010 18:14:25.125091   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Skipping /home - not owner
	I1010 18:14:25.126075   99368 main.go:141] libmachine: (ha-142481-m02) define libvirt domain using xml: 
	I1010 18:14:25.126098   99368 main.go:141] libmachine: (ha-142481-m02) <domain type='kvm'>
	I1010 18:14:25.126107   99368 main.go:141] libmachine: (ha-142481-m02)   <name>ha-142481-m02</name>
	I1010 18:14:25.126114   99368 main.go:141] libmachine: (ha-142481-m02)   <memory unit='MiB'>2200</memory>
	I1010 18:14:25.126125   99368 main.go:141] libmachine: (ha-142481-m02)   <vcpu>2</vcpu>
	I1010 18:14:25.126132   99368 main.go:141] libmachine: (ha-142481-m02)   <features>
	I1010 18:14:25.126140   99368 main.go:141] libmachine: (ha-142481-m02)     <acpi/>
	I1010 18:14:25.126150   99368 main.go:141] libmachine: (ha-142481-m02)     <apic/>
	I1010 18:14:25.126164   99368 main.go:141] libmachine: (ha-142481-m02)     <pae/>
	I1010 18:14:25.126176   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126185   99368 main.go:141] libmachine: (ha-142481-m02)   </features>
	I1010 18:14:25.126193   99368 main.go:141] libmachine: (ha-142481-m02)   <cpu mode='host-passthrough'>
	I1010 18:14:25.126201   99368 main.go:141] libmachine: (ha-142481-m02)   
	I1010 18:14:25.126208   99368 main.go:141] libmachine: (ha-142481-m02)   </cpu>
	I1010 18:14:25.126215   99368 main.go:141] libmachine: (ha-142481-m02)   <os>
	I1010 18:14:25.126225   99368 main.go:141] libmachine: (ha-142481-m02)     <type>hvm</type>
	I1010 18:14:25.126232   99368 main.go:141] libmachine: (ha-142481-m02)     <boot dev='cdrom'/>
	I1010 18:14:25.126241   99368 main.go:141] libmachine: (ha-142481-m02)     <boot dev='hd'/>
	I1010 18:14:25.126251   99368 main.go:141] libmachine: (ha-142481-m02)     <bootmenu enable='no'/>
	I1010 18:14:25.126273   99368 main.go:141] libmachine: (ha-142481-m02)   </os>
	I1010 18:14:25.126284   99368 main.go:141] libmachine: (ha-142481-m02)   <devices>
	I1010 18:14:25.126294   99368 main.go:141] libmachine: (ha-142481-m02)     <disk type='file' device='cdrom'>
	I1010 18:14:25.126307   99368 main.go:141] libmachine: (ha-142481-m02)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/boot2docker.iso'/>
	I1010 18:14:25.126318   99368 main.go:141] libmachine: (ha-142481-m02)       <target dev='hdc' bus='scsi'/>
	I1010 18:14:25.126329   99368 main.go:141] libmachine: (ha-142481-m02)       <readonly/>
	I1010 18:14:25.126342   99368 main.go:141] libmachine: (ha-142481-m02)     </disk>
	I1010 18:14:25.126353   99368 main.go:141] libmachine: (ha-142481-m02)     <disk type='file' device='disk'>
	I1010 18:14:25.126365   99368 main.go:141] libmachine: (ha-142481-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:14:25.126380   99368 main.go:141] libmachine: (ha-142481-m02)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/ha-142481-m02.rawdisk'/>
	I1010 18:14:25.126391   99368 main.go:141] libmachine: (ha-142481-m02)       <target dev='hda' bus='virtio'/>
	I1010 18:14:25.126401   99368 main.go:141] libmachine: (ha-142481-m02)     </disk>
	I1010 18:14:25.126413   99368 main.go:141] libmachine: (ha-142481-m02)     <interface type='network'>
	I1010 18:14:25.126425   99368 main.go:141] libmachine: (ha-142481-m02)       <source network='mk-ha-142481'/>
	I1010 18:14:25.126434   99368 main.go:141] libmachine: (ha-142481-m02)       <model type='virtio'/>
	I1010 18:14:25.126443   99368 main.go:141] libmachine: (ha-142481-m02)     </interface>
	I1010 18:14:25.126454   99368 main.go:141] libmachine: (ha-142481-m02)     <interface type='network'>
	I1010 18:14:25.126463   99368 main.go:141] libmachine: (ha-142481-m02)       <source network='default'/>
	I1010 18:14:25.126473   99368 main.go:141] libmachine: (ha-142481-m02)       <model type='virtio'/>
	I1010 18:14:25.126494   99368 main.go:141] libmachine: (ha-142481-m02)     </interface>
	I1010 18:14:25.126518   99368 main.go:141] libmachine: (ha-142481-m02)     <serial type='pty'>
	I1010 18:14:25.126526   99368 main.go:141] libmachine: (ha-142481-m02)       <target port='0'/>
	I1010 18:14:25.126530   99368 main.go:141] libmachine: (ha-142481-m02)     </serial>
	I1010 18:14:25.126535   99368 main.go:141] libmachine: (ha-142481-m02)     <console type='pty'>
	I1010 18:14:25.126545   99368 main.go:141] libmachine: (ha-142481-m02)       <target type='serial' port='0'/>
	I1010 18:14:25.126550   99368 main.go:141] libmachine: (ha-142481-m02)     </console>
	I1010 18:14:25.126556   99368 main.go:141] libmachine: (ha-142481-m02)     <rng model='virtio'>
	I1010 18:14:25.126562   99368 main.go:141] libmachine: (ha-142481-m02)       <backend model='random'>/dev/random</backend>
	I1010 18:14:25.126569   99368 main.go:141] libmachine: (ha-142481-m02)     </rng>
	I1010 18:14:25.126574   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126579   99368 main.go:141] libmachine: (ha-142481-m02)     
	I1010 18:14:25.126610   99368 main.go:141] libmachine: (ha-142481-m02)   </devices>
	I1010 18:14:25.126633   99368 main.go:141] libmachine: (ha-142481-m02) </domain>
	I1010 18:14:25.126647   99368 main.go:141] libmachine: (ha-142481-m02) 
	I1010 18:14:25.133808   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:63:37:66 in network default
	I1010 18:14:25.134525   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:25.134551   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring networks are active...
	I1010 18:14:25.135477   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring network default is active
	I1010 18:14:25.135837   99368 main.go:141] libmachine: (ha-142481-m02) Ensuring network mk-ha-142481 is active
	I1010 18:14:25.136343   99368 main.go:141] libmachine: (ha-142481-m02) Getting domain xml...
	I1010 18:14:25.137263   99368 main.go:141] libmachine: (ha-142481-m02) Creating domain...
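Note: the block above assembles the libvirt domain XML for ha-142481-m02 (2 vCPUs, 2200 MiB of memory, the boot2docker ISO as a CD-ROM, the raw disk image, and NICs on both the default and mk-ha-142481 networks) and then defines and starts the domain. As a rough illustration of that define-and-start flow, assuming the libvirt.org/go/libvirt bindings rather than minikube's actual kvm2 driver code:

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// KVMQemuURI from the cluster config logged above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Substitute the full <domain> XML assembled in the log above.
	domainXML := "<domain type='kvm'>...</domain>"

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..."
		log.Fatal(err)
	}
}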
	I1010 18:14:26.362672   99368 main.go:141] libmachine: (ha-142481-m02) Waiting to get IP...
	I1010 18:14:26.363443   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.363821   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.363878   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.363829   99736 retry.go:31] will retry after 237.123337ms: waiting for machine to come up
	I1010 18:14:26.602398   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.602883   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.602910   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.602829   99736 retry.go:31] will retry after 255.919096ms: waiting for machine to come up
	I1010 18:14:26.860273   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:26.860891   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:26.860917   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:26.860860   99736 retry.go:31] will retry after 363.867823ms: waiting for machine to come up
	I1010 18:14:27.226493   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:27.226955   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:27.226984   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:27.226896   99736 retry.go:31] will retry after 430.931001ms: waiting for machine to come up
	I1010 18:14:27.659820   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:27.660273   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:27.660299   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:27.660222   99736 retry.go:31] will retry after 681.867141ms: waiting for machine to come up
	I1010 18:14:28.344366   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:28.344931   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:28.344989   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:28.344843   99736 retry.go:31] will retry after 753.410001ms: waiting for machine to come up
	I1010 18:14:29.099845   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:29.100316   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:29.100345   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:29.100254   99736 retry.go:31] will retry after 1.081998824s: waiting for machine to come up
	I1010 18:14:30.183319   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:30.183733   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:30.183762   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:30.183699   99736 retry.go:31] will retry after 1.2621544s: waiting for machine to come up
	I1010 18:14:31.448194   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:31.448615   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:31.448639   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:31.448571   99736 retry.go:31] will retry after 1.545841483s: waiting for machine to come up
	I1010 18:14:32.996370   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:32.996940   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:32.996970   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:32.996877   99736 retry.go:31] will retry after 1.954916368s: waiting for machine to come up
	I1010 18:14:34.953362   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:34.953810   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:34.953834   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:34.953765   99736 retry.go:31] will retry after 2.832021438s: waiting for machine to come up
	I1010 18:14:37.787030   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:37.787437   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:37.787462   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:37.787399   99736 retry.go:31] will retry after 3.372903659s: waiting for machine to come up
	I1010 18:14:41.162229   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:41.162830   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:41.162860   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:41.162748   99736 retry.go:31] will retry after 3.532610017s: waiting for machine to come up
	I1010 18:14:44.697346   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:44.697811   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find current IP address of domain ha-142481-m02 in network mk-ha-142481
	I1010 18:14:44.697838   99368 main.go:141] libmachine: (ha-142481-m02) DBG | I1010 18:14:44.697765   99736 retry.go:31] will retry after 4.121205885s: waiting for machine to come up
	I1010 18:14:48.820235   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.820691   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has current primary IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.820707   99368 main.go:141] libmachine: (ha-142481-m02) Found IP for machine: 192.168.39.186
	I1010 18:14:48.820716   99368 main.go:141] libmachine: (ha-142481-m02) Reserving static IP address...
	I1010 18:14:48.821115   99368 main.go:141] libmachine: (ha-142481-m02) DBG | unable to find host DHCP lease matching {name: "ha-142481-m02", mac: "52:54:00:70:30:26", ip: "192.168.39.186"} in network mk-ha-142481
	I1010 18:14:48.903340   99368 main.go:141] libmachine: (ha-142481-m02) Reserved static IP address: 192.168.39.186
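The long run of "will retry after ..." lines above is the KVM driver polling libvirt with a growing backoff until the new domain obtains a DHCP lease on the mk-ha-142481 network. A rough manual equivalent, assuming shell access to the libvirt host and the qemu:///system URI (both assumptions, not shown in the log), would be:

    # Poll the libvirt network for a lease matching the VM's MAC (sketch only, not minikube's code).
    until virsh --connect qemu:///system net-dhcp-leases mk-ha-142481 | grep -q '52:54:00:70:30:26'; do
      sleep 2   # minikube's retry.go uses a growing backoff rather than a fixed sleep
    done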
	I1010 18:14:48.903376   99368 main.go:141] libmachine: (ha-142481-m02) Waiting for SSH to be available...
	I1010 18:14:48.903387   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Getting to WaitForSSH function...
	I1010 18:14:48.906232   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.906828   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:30:26}
	I1010 18:14:48.906862   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:48.907057   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using SSH client type: external
	I1010 18:14:48.907087   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa (-rw-------)
	I1010 18:14:48.907120   99368 main.go:141] libmachine: (ha-142481-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:14:48.907134   99368 main.go:141] libmachine: (ha-142481-m02) DBG | About to run SSH command:
	I1010 18:14:48.907147   99368 main.go:141] libmachine: (ha-142481-m02) DBG | exit 0
	I1010 18:14:49.037555   99368 main.go:141] libmachine: (ha-142481-m02) DBG | SSH cmd err, output: <nil>: 
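The WaitForSSH step above is simply running "exit 0" through an external ssh client with the machine's generated key; the "<nil>" result records a zero exit status and empty output. A minimal stand-alone probe built from the exact options logged a few lines earlier might look like:

    # Keep probing until sshd inside the guest accepts the key and runs a trivial command.
    until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
          -o ConnectTimeout=10 -o IdentitiesOnly=yes \
          -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa \
          docker@192.168.39.186 'exit 0'; do
      sleep 2
    done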
	I1010 18:14:49.037876   99368 main.go:141] libmachine: (ha-142481-m02) KVM machine creation complete!
	I1010 18:14:49.038189   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:49.038756   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:49.038950   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:49.039103   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:14:49.039117   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetState
	I1010 18:14:49.040560   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:14:49.040573   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:14:49.040578   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:14:49.040584   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.042911   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.043240   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.043266   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.043533   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.043730   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.043927   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.044092   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.044245   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.044498   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.044515   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:14:49.156568   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:14:49.156599   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:14:49.156607   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.159819   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.160299   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.160329   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.160572   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.160782   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.160954   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.161115   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.161282   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.161504   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.161519   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:14:49.274150   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:14:49.274238   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:14:49.274249   99368 main.go:141] libmachine: Provisioning with buildroot...
	I1010 18:14:49.274261   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.274541   99368 buildroot.go:166] provisioning hostname "ha-142481-m02"
	I1010 18:14:49.274574   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.274809   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.277484   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.277861   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.277893   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.278037   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.278241   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.278416   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.278595   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.278858   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.279047   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.279061   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481-m02 && echo "ha-142481-m02" | sudo tee /etc/hostname
	I1010 18:14:49.409335   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481-m02
	
	I1010 18:14:49.409369   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.412112   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.412427   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.412458   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.412712   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.412921   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.413069   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.413182   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.413398   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.413565   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.413581   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:14:49.542003   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
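The hostname script above only rewrites the 127.0.1.1 line in /etc/hosts (or appends one if it is missing). A quick follow-up check on the guest, shown here only for illustration and not part of the test run, would be:

    # Expect the new hostname and a matching 127.0.1.1 alias after provisioning.
    hostname                       # ha-142481-m02
    grep '^127.0.1.1' /etc/hosts   # 127.0.1.1 ha-142481-m02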
	I1010 18:14:49.542039   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:14:49.542058   99368 buildroot.go:174] setting up certificates
	I1010 18:14:49.542069   99368 provision.go:84] configureAuth start
	I1010 18:14:49.542080   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetMachineName
	I1010 18:14:49.542340   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:49.545159   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.545524   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.545554   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.545698   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.547804   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.548115   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.548135   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.548323   99368 provision.go:143] copyHostCerts
	I1010 18:14:49.548352   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:49.548392   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:14:49.548403   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:14:49.548486   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:14:49.548582   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:49.548609   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:14:49.548619   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:14:49.548657   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:14:49.548719   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:49.548743   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:14:49.548752   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:14:49.548788   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:14:49.548865   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481-m02 san=[127.0.0.1 192.168.39.186 ha-142481-m02 localhost minikube]
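The machine's server certificate is issued from the shared minikube CA with the SANs listed above (loopback, the node IP 192.168.39.186 and the hostnames). minikube does this in Go; a roughly equivalent openssl invocation, given only as an illustration and assuming the ca.pem/ca-key.pem files named in the log, is:

    # Sketch only: issue a server cert from the existing CA with the same SANs.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -subj "/O=jenkins.ha-142481-m02" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.186,DNS:ha-142481-m02,DNS:localhost,DNS:minikube')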
	I1010 18:14:49.606708   99368 provision.go:177] copyRemoteCerts
	I1010 18:14:49.606781   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:14:49.606811   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.609620   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.609921   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.609952   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.610121   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.610322   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.610506   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.610631   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:49.695655   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:14:49.695736   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 18:14:49.723445   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:14:49.723520   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:14:49.748318   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:14:49.748402   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:14:49.773423   99368 provision.go:87] duration metric: took 231.339814ms to configureAuth
	I1010 18:14:49.773451   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:14:49.773626   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:49.773705   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:49.776350   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.776701   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:49.776726   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:49.776913   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:49.777128   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.777292   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:49.777435   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:49.777590   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:49.777795   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:49.777817   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:14:50.018484   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:14:50.018513   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:14:50.018525   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetURL
	I1010 18:14:50.019796   99368 main.go:141] libmachine: (ha-142481-m02) DBG | Using libvirt version 6000000
	I1010 18:14:50.022107   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.022432   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.022476   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.022628   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:14:50.022646   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:14:50.022657   99368 client.go:171] duration metric: took 25.439118717s to LocalClient.Create
	I1010 18:14:50.022695   99368 start.go:167] duration metric: took 25.439191435s to libmachine.API.Create "ha-142481"
	I1010 18:14:50.022708   99368 start.go:293] postStartSetup for "ha-142481-m02" (driver="kvm2")
	I1010 18:14:50.022725   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:14:50.022763   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.023030   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:14:50.023055   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.025463   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.025834   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.025869   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.026093   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.026322   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.026520   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.026673   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.115488   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:14:50.120106   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:14:50.120146   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:14:50.120259   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:14:50.120347   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:14:50.120360   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:14:50.120462   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:14:50.130011   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:50.156296   99368 start.go:296] duration metric: took 133.570332ms for postStartSetup
	I1010 18:14:50.156350   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetConfigRaw
	I1010 18:14:50.156937   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:50.159597   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.160043   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.160071   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.160321   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:14:50.160495   99368 start.go:128] duration metric: took 25.595643097s to createHost
	I1010 18:14:50.160517   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.162762   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.163085   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.163110   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.163276   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.163459   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.163603   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.163760   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.163931   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:14:50.164125   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.186 22 <nil> <nil>}
	I1010 18:14:50.164139   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:14:50.277898   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584090.237251579
	
	I1010 18:14:50.277925   99368 fix.go:216] guest clock: 1728584090.237251579
	I1010 18:14:50.277933   99368 fix.go:229] Guest: 2024-10-10 18:14:50.237251579 +0000 UTC Remote: 2024-10-10 18:14:50.160506288 +0000 UTC m=+72.091094363 (delta=76.745291ms)
	I1010 18:14:50.277949   99368 fix.go:200] guest clock delta is within tolerance: 76.745291ms
	I1010 18:14:50.277955   99368 start.go:83] releasing machines lock for "ha-142481-m02", held for 25.713195595s
	I1010 18:14:50.277975   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.278294   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:50.280842   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.281256   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.281283   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.283734   99368 out.go:177] * Found network options:
	I1010 18:14:50.285300   99368 out.go:177]   - NO_PROXY=192.168.39.104
	W1010 18:14:50.286708   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:14:50.286748   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287340   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287549   99368 main.go:141] libmachine: (ha-142481-m02) Calling .DriverName
	I1010 18:14:50.287642   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:14:50.287694   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	W1010 18:14:50.287740   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:14:50.287827   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:14:50.287852   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHHostname
	I1010 18:14:50.290823   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.290971   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291276   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.291307   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291499   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.291594   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:50.291635   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:50.291693   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.291858   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHPort
	I1010 18:14:50.291862   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.292017   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHKeyPath
	I1010 18:14:50.292017   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.292146   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetSSHUsername
	I1010 18:14:50.292458   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m02/id_rsa Username:docker}
	I1010 18:14:50.532570   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:14:50.540169   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:14:50.540248   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:14:50.557472   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:14:50.557500   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:14:50.557574   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:14:50.574787   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:14:50.590774   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:14:50.590848   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:14:50.605941   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:14:50.620901   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:14:50.753387   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:14:50.919446   99368 docker.go:233] disabling docker service ...
	I1010 18:14:50.919535   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:14:50.934691   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:14:50.948383   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:14:51.098212   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:14:51.222205   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:14:51.236395   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:14:51.255620   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:14:51.255682   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.265706   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:14:51.265766   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.276288   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.287384   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.298290   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:14:51.309391   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.322059   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:14:51.341165   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
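The sed calls above set the CRI-O options minikube relies on: the pause image, the cgroupfs cgroup manager, conmon placed in the pod cgroup, and the unprivileged-port sysctl. Before the crio restart a few lines further down, their cumulative effect on the drop-in file can be spot-checked with something like (illustrative, not part of the test run):

    # Expected after the edits: pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs",
    # conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" in default_sysctls.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf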
	I1010 18:14:51.352334   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:14:51.361995   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:14:51.362055   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:14:51.376647   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:14:51.387344   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:51.501276   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:14:51.591570   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:14:51.591667   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:14:51.596519   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:14:51.596593   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:14:51.600964   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:14:51.642625   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:14:51.642709   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:51.670857   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:14:51.701992   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:14:51.703402   99368 out.go:177]   - env NO_PROXY=192.168.39.104
	I1010 18:14:51.704577   99368 main.go:141] libmachine: (ha-142481-m02) Calling .GetIP
	I1010 18:14:51.707504   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:51.707889   99368 main.go:141] libmachine: (ha-142481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:30:26", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:14:39 +0000 UTC Type:0 Mac:52:54:00:70:30:26 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:ha-142481-m02 Clientid:01:52:54:00:70:30:26}
	I1010 18:14:51.707921   99368 main.go:141] libmachine: (ha-142481-m02) DBG | domain ha-142481-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:70:30:26 in network mk-ha-142481
	I1010 18:14:51.708187   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:14:51.712581   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:14:51.728042   99368 mustload.go:65] Loading cluster: ha-142481
	I1010 18:14:51.728254   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:14:51.728534   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:51.728571   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:51.744127   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38121
	I1010 18:14:51.744674   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:51.745223   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:51.745247   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:51.745620   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:51.745831   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:14:51.747403   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:51.747706   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:51.747737   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:51.763030   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41747
	I1010 18:14:51.763446   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:51.763925   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:51.763949   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:51.764295   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:51.764486   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:51.764627   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.186
	I1010 18:14:51.764637   99368 certs.go:194] generating shared ca certs ...
	I1010 18:14:51.764650   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.764765   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:14:51.764803   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:14:51.764812   99368 certs.go:256] generating profile certs ...
	I1010 18:14:51.764912   99368 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:14:51.764937   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992
	I1010 18:14:51.764951   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.186 192.168.39.254]
	I1010 18:14:51.993768   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 ...
	I1010 18:14:51.993803   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992: {Name:mk9eca5b6bcf4de2bd1cb4984282b7c5168c504a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.993982   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992 ...
	I1010 18:14:51.993996   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992: {Name:mk53f522d230afb3a7d1b4f761a379d6be7ff843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:14:51.994077   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.b244f992 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:14:51.994210   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.b244f992 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
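The regenerated apiserver certificate (suffix .b244f992) covers the service IP 10.96.0.1, both control-plane node IPs and the kube-vip VIP 192.168.39.254, which is what lets clients keep using a single endpoint when a control-plane node goes down. Its SAN list can be inspected on the node with a standard openssl call, shown here for illustration:

    openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'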
	I1010 18:14:51.994347   99368 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:14:51.994363   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:14:51.994376   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:14:51.994389   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:14:51.994407   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:14:51.994420   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:14:51.994432   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:14:51.994443   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:14:51.994454   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:14:51.994507   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:14:51.994535   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:14:51.994545   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:14:51.994565   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:14:51.994589   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:14:51.994613   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:14:51.994650   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:14:51.994681   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:51.994695   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:14:51.994706   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:14:51.994740   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:51.997958   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:51.998443   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:51.998473   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:51.998636   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:51.998839   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:51.999035   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:51.999239   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:52.077280   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1010 18:14:52.082655   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1010 18:14:52.094293   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1010 18:14:52.102951   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1010 18:14:52.115800   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1010 18:14:52.120082   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1010 18:14:52.130693   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1010 18:14:52.135696   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1010 18:14:52.148816   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1010 18:14:52.158283   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1010 18:14:52.169959   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1010 18:14:52.174352   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1010 18:14:52.185494   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:14:52.211191   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:14:52.237842   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:14:52.263110   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:14:52.287843   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1010 18:14:52.313473   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:14:52.338065   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:14:52.363071   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:14:52.387579   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:14:52.412888   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:14:52.437781   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:14:52.464757   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1010 18:14:52.481913   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1010 18:14:52.499025   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1010 18:14:52.515900   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1010 18:14:52.533545   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1010 18:14:52.550809   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1010 18:14:52.567422   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1010 18:14:52.584795   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:14:52.590891   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:14:52.602879   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.607603   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.607658   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:14:52.613708   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:14:52.631468   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:14:52.643064   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.647811   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.647874   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:14:52.653881   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:14:52.665152   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:14:52.676562   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.681256   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.681313   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:14:52.687223   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
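	Each certificate above is installed by copying it into /usr/share/ca-certificates and then symlinking it into /etc/ssl/certs under its OpenSSL subject hash, so CApath-based lookups can resolve it. A minimal sketch of that convention (using the minikubeCA.pem path from this run; the test -L / test -s guards from the log are omitted):
	    # Illustrative sketch: install a CA certificate under its OpenSSL subject hash.
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")   # this run reported b5213941
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # <hash>.0 is what CApath lookups expect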
	I1010 18:14:52.699194   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:14:52.703641   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:14:52.703707   99368 kubeadm.go:934] updating node {m02 192.168.39.186 8443 v1.31.1 crio true true} ...
	I1010 18:14:52.703805   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.186
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:14:52.703835   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:14:52.703878   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:14:52.723026   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:14:52.723119   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
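	The manifest generated above is written to /etc/kubernetes/manifests/kube-vip.yaml a little later in the log (the 1441-byte scp at 18:14:54.223508). The kubelet started with --config=/var/lib/kubelet/config.yaml runs everything in that directory as a static pod; a quick way to confirm the wiring on the node, assuming the kubeadm-default staticPodPath that minikube uses:
	    # staticPodPath names the directory kubelet treats as static pod manifests.
	    grep staticPodPath /var/lib/kubelet/config.yaml   # expected: /etc/kubernetes/manifests
	    ls /etc/kubernetes/manifests                      # kube-vip.yaml appears after the scp step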
	I1010 18:14:52.723189   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:52.734671   99368 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1010 18:14:52.734752   99368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1010 18:14:52.745741   99368 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1010 18:14:52.745751   99368 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1010 18:14:52.745751   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1010 18:14:52.745871   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:14:52.745940   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:14:52.751099   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1010 18:14:52.751132   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1010 18:14:53.544046   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:14:53.544130   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:14:53.549472   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1010 18:14:53.549517   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1010 18:14:53.647955   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:14:53.681722   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:14:53.681823   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:14:53.695932   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1010 18:14:53.695987   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1010 18:14:54.175941   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1010 18:14:54.187282   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1010 18:14:54.205511   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:14:54.223508   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1010 18:14:54.241125   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:14:54.245490   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
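	The /etc/hosts rewrite at 18:14:54.245490 is easier to read expanded: it drops any stale control-plane.minikube.internal entry and re-adds one pointing at the HA virtual IP. A functionally equivalent version, reformatted with comments:
	    {
	      # keep every line except an existing "<tab>control-plane.minikube.internal" entry
	      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	      # append the entry for the load-balanced VIP
	      printf '192.168.39.254\tcontrol-plane.minikube.internal\n'
	    } > /tmp/h.$$                  # $$ is the shell PID, used as a temp-file suffix
	    sudo cp /tmp/h.$$ /etc/hosts   # cp (not mv) preserves the file's ownership and context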
	I1010 18:14:54.259173   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:14:54.401351   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:14:54.419984   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:14:54.420484   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:14:54.420546   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:14:54.436033   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I1010 18:14:54.436556   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:14:54.437251   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:14:54.437281   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:14:54.437607   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:14:54.437831   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:14:54.438020   99368 start.go:317] joinCluster: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:14:54.438157   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1010 18:14:54.438180   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:14:54.441157   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:54.441581   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:14:54.441609   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:14:54.441854   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:14:54.442034   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:14:54.442149   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:14:54.442289   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:14:54.604951   99368 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:14:54.605013   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wt3o3w.k6pkjtb13sd57t6w --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m02 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443"
	I1010 18:15:14.578208   99368 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wt3o3w.k6pkjtb13sd57t6w --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m02 --control-plane --apiserver-advertise-address=192.168.39.186 --apiserver-bind-port=8443": (19.973131424s)
	I1010 18:15:14.578257   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1010 18:15:15.095544   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481-m02 minikube.k8s.io/updated_at=2024_10_10T18_15_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=false
	I1010 18:15:15.208568   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-142481-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1010 18:15:15.337167   99368 start.go:319] duration metric: took 20.899144024s to joinCluster
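	The sequence from 18:14:54 to 18:15:15 is the standard kubeadm flow for adding a second control-plane member: mint a join token on the existing control plane, run kubeadm join with --control-plane on the new machine, then re-enable kubelet. Condensed below, with the secrets replaced by placeholders rather than the values from this run:
	    # On the existing control-plane node: mint a non-expiring join token.
	    sudo kubeadm token create --print-join-command --ttl=0
	    # On the new node (<token>/<hash> are placeholders; the run above targets the VIP name):
	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token <token> \
	      --discovery-token-ca-cert-hash sha256:<hash> \
	      --control-plane \
	      --apiserver-advertise-address=192.168.39.186 \
	      --apiserver-bind-port=8443 \
	      --cri-socket unix:///var/run/crio/crio.sock \
	      --node-name=ha-142481-m02
	    sudo systemctl enable --now kubelet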
	I1010 18:15:15.337270   99368 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:15:15.337601   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:15:15.339949   99368 out.go:177] * Verifying Kubernetes components...
	I1010 18:15:15.341260   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:15:15.615485   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:15:15.642973   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:15:15.643325   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1010 18:15:15.643422   99368 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.104:8443
	I1010 18:15:15.643731   99368 node_ready.go:35] waiting up to 6m0s for node "ha-142481-m02" to be "Ready" ...
	I1010 18:15:15.643859   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:15.643869   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:15.643880   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:15.643892   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:15.665402   99368 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1010 18:15:16.144314   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:16.144340   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:16.144351   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:16.144357   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:16.150219   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:16.644045   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:16.644074   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:16.644086   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:16.644093   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:16.654043   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:17.144554   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:17.144581   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:17.144590   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:17.144595   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:17.148858   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:17.643970   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:17.644078   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:17.644104   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:17.644122   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:17.653880   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:17.654572   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:18.144266   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:18.144294   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:18.144302   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:18.144308   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:18.147936   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:18.644346   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:18.644369   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:18.644378   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:18.644382   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:18.648587   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:19.144413   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:19.144443   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:19.144454   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:19.144460   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:19.147695   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:19.644688   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:19.644715   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:19.644726   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:19.644730   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:19.648487   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:20.144679   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:20.144700   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:20.144708   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:20.144712   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:20.148475   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:20.149193   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:20.644644   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:20.644675   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:20.644687   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:20.644694   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:20.648513   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:21.144341   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:21.144366   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:21.144377   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:21.144384   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:21.147839   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:21.644909   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:21.644934   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:21.644942   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:21.644946   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:21.648387   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:22.144173   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:22.144196   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:22.144205   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:22.144209   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:22.147385   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:22.644414   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:22.644444   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:22.644456   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:22.644462   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:22.713904   99368 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I1010 18:15:22.714410   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:23.144902   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:23.144934   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:23.144947   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:23.144954   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:23.147993   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:23.644885   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:23.644971   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:23.644995   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:23.645002   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:23.648711   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:24.144645   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:24.144673   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:24.144685   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:24.144690   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:24.148415   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:24.644379   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:24.644413   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:24.644424   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:24.644429   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:24.648175   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:25.144097   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:25.144120   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:25.144128   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:25.144133   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:25.147203   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:25.147854   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:25.644276   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:25.644303   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:25.644311   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:25.644316   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:25.647929   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:26.143986   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:26.144010   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:26.144018   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:26.144023   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:26.147277   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:26.644893   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:26.644924   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:26.644934   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:26.644939   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:26.648455   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:27.144020   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:27.144042   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:27.144050   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:27.144053   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:27.150719   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:15:27.151307   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:27.644596   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:27.644620   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:27.644628   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:27.644632   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:27.648391   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:28.144777   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:28.144801   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:28.144809   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:28.144813   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:28.148258   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:28.644636   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:28.644665   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:28.644673   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:28.644676   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:28.648181   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.144094   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:29.144120   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:29.144128   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:29.144133   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:29.147945   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.644955   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:29.644977   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:29.644986   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:29.644990   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:29.648391   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:29.649199   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:30.144628   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:30.144653   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:30.144661   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:30.144665   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:30.148286   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:30.644255   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:30.644288   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:30.644299   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:30.644304   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:30.648062   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:31.144076   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:31.144101   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:31.144109   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:31.144112   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:31.148081   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:31.644011   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:31.644037   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:31.644049   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:31.644055   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:31.653327   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:31.653921   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:32.144247   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:32.144273   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:32.144282   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:32.144286   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:32.147700   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:32.644836   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:32.644894   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:32.644908   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:32.644913   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:32.648022   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:33.144204   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:33.144231   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:33.144240   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:33.144242   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:33.148094   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:33.644909   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:33.644932   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:33.644940   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:33.644943   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:33.648586   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.144644   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.144672   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.144680   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.144685   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.148129   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.148805   99368 node_ready.go:53] node "ha-142481-m02" has status "Ready":"False"
	I1010 18:15:34.644279   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.644310   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.644321   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.644329   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.648073   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:34.648695   99368 node_ready.go:49] node "ha-142481-m02" has status "Ready":"True"
	I1010 18:15:34.648716   99368 node_ready.go:38] duration metric: took 19.004960132s for node "ha-142481-m02" to be "Ready" ...
	I1010 18:15:34.648732   99368 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:15:34.648874   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:34.648887   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.648899   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.648905   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.653067   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:34.660867   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.660985   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-28dll
	I1010 18:15:34.660996   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.661004   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.661008   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.673094   99368 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1010 18:15:34.673807   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.673825   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.673833   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.673838   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.679300   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:34.679893   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.679919   99368 pod_ready.go:82] duration metric: took 19.021803ms for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.679934   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.680016   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xfhq8
	I1010 18:15:34.680028   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.680039   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.680046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.687874   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:15:34.688550   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.688567   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.688575   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.688578   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.693607   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:34.694298   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.694318   99368 pod_ready.go:82] duration metric: took 14.376081ms for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.694329   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.694401   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481
	I1010 18:15:34.694412   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.694422   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.694427   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.705466   99368 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1010 18:15:34.706122   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:34.706142   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.706152   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.706157   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.713862   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:15:34.714292   99368 pod_ready.go:93] pod "etcd-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.714313   99368 pod_ready.go:82] duration metric: took 19.977824ms for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.714324   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.714393   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m02
	I1010 18:15:34.714397   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.714407   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.714411   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.724173   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:15:34.725474   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:34.725492   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.725502   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.725507   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.728517   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:15:34.729350   99368 pod_ready.go:93] pod "etcd-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:34.729374   99368 pod_ready.go:82] duration metric: took 15.044498ms for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.729392   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:34.844828   99368 request.go:632] Waited for 115.352966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:15:34.844940   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:15:34.844954   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:34.844965   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:34.844980   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:34.849582   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.044720   99368 request.go:632] Waited for 194.440409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.044815   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.044823   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.044922   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.044934   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.049101   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.049648   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.049671   99368 pod_ready.go:82] duration metric: took 320.272231ms for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.049694   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.244714   99368 request.go:632] Waited for 194.93387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:15:35.244774   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:15:35.244780   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.244788   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.244791   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.248696   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:35.444831   99368 request.go:632] Waited for 195.412897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:35.444927   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:35.444933   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.444942   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.444946   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.448991   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.450079   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.450103   99368 pod_ready.go:82] duration metric: took 400.401007ms for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.450118   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.645157   99368 request.go:632] Waited for 194.960575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:15:35.645249   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:15:35.645257   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.645268   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.645274   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.648746   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:35.844906   99368 request.go:632] Waited for 195.418533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.844969   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:35.844974   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:35.844982   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:35.844985   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:35.849036   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:35.849631   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:35.849652   99368 pod_ready.go:82] duration metric: took 399.526564ms for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:35.849663   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.044750   99368 request.go:632] Waited for 194.993362ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:15:36.044821   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:15:36.044829   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.044841   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.044860   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.048403   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.244872   99368 request.go:632] Waited for 195.41194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:36.244966   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:36.244978   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.244991   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.245003   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.248422   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.249090   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:36.249112   99368 pod_ready.go:82] duration metric: took 399.440459ms for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.249127   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.445275   99368 request.go:632] Waited for 196.04196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:15:36.445337   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:15:36.445343   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.445350   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.445354   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.449425   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:36.644689   99368 request.go:632] Waited for 194.411636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:36.644795   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:36.644806   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.644817   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.644825   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.648756   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:36.649220   99368 pod_ready.go:93] pod "kube-proxy-gwvrh" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:36.649241   99368 pod_ready.go:82] duration metric: took 400.105171ms for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.649254   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:36.844338   99368 request.go:632] Waited for 194.987151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:15:36.844405   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:15:36.844411   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:36.844420   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:36.844434   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:36.848477   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:37.044640   99368 request.go:632] Waited for 195.367234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.044708   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.044715   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.044726   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.044731   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.048116   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.048721   99368 pod_ready.go:93] pod "kube-proxy-srfng" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.048745   99368 pod_ready.go:82] duration metric: took 399.483125ms for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.048759   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.244914   99368 request.go:632] Waited for 196.022775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:15:37.244993   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:15:37.245004   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.245029   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.245036   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.248801   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.444916   99368 request.go:632] Waited for 195.401869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:37.444984   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:15:37.444991   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.445002   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.445008   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.448457   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.449008   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.449028   99368 pod_ready.go:82] duration metric: took 400.260773ms for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.449039   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.645172   99368 request.go:632] Waited for 196.046461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:15:37.645249   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:15:37.645256   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.645265   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.645271   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.648894   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.844799   99368 request.go:632] Waited for 195.42858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.844915   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:15:37.844926   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.844937   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.844945   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.848459   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:37.849058   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:15:37.849077   99368 pod_ready.go:82] duration metric: took 400.031968ms for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:15:37.849089   99368 pod_ready.go:39] duration metric: took 3.200308757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:15:37.849113   99368 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:15:37.849168   99368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:15:37.867701   99368 api_server.go:72] duration metric: took 22.53038697s to wait for apiserver process to appear ...
	I1010 18:15:37.867737   99368 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:15:37.867762   99368 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I1010 18:15:37.874449   99368 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I1010 18:15:37.874534   99368 round_trippers.go:463] GET https://192.168.39.104:8443/version
	I1010 18:15:37.874545   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:37.874561   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:37.874568   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:37.875635   99368 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1010 18:15:37.875761   99368 api_server.go:141] control plane version: v1.31.1
	I1010 18:15:37.875781   99368 api_server.go:131] duration metric: took 8.036588ms to wait for apiserver health ...
	I1010 18:15:37.875792   99368 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:15:38.045248   99368 request.go:632] Waited for 169.346857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.045336   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.045344   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.045356   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.045367   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.051387   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:15:38.056244   99368 system_pods.go:59] 17 kube-system pods found
	I1010 18:15:38.056282   99368 system_pods.go:61] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:15:38.056289   99368 system_pods.go:61] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:15:38.056293   99368 system_pods.go:61] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:15:38.056297   99368 system_pods.go:61] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:15:38.056300   99368 system_pods.go:61] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:15:38.056308   99368 system_pods.go:61] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:15:38.056311   99368 system_pods.go:61] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:15:38.056315   99368 system_pods.go:61] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:15:38.056318   99368 system_pods.go:61] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:15:38.056323   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:15:38.056327   99368 system_pods.go:61] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:15:38.056331   99368 system_pods.go:61] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:15:38.056334   99368 system_pods.go:61] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:15:38.056337   99368 system_pods.go:61] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:15:38.056340   99368 system_pods.go:61] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:15:38.056343   99368 system_pods.go:61] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:15:38.056345   99368 system_pods.go:61] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:15:38.056352   99368 system_pods.go:74] duration metric: took 180.553557ms to wait for pod list to return data ...
	I1010 18:15:38.056362   99368 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:15:38.244537   99368 request.go:632] Waited for 188.093724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:15:38.244618   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:15:38.244624   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.244633   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.244641   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.248165   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:15:38.248399   99368 default_sa.go:45] found service account: "default"
	I1010 18:15:38.248416   99368 default_sa.go:55] duration metric: took 192.046524ms for default service account to be created ...
	I1010 18:15:38.248427   99368 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:15:38.444704   99368 request.go:632] Waited for 196.206785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.444765   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:15:38.444770   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.444778   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.444783   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.479585   99368 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I1010 18:15:38.484055   99368 system_pods.go:86] 17 kube-system pods found
	I1010 18:15:38.484088   99368 system_pods.go:89] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:15:38.484094   99368 system_pods.go:89] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:15:38.484098   99368 system_pods.go:89] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:15:38.484102   99368 system_pods.go:89] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:15:38.484106   99368 system_pods.go:89] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:15:38.484109   99368 system_pods.go:89] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:15:38.484113   99368 system_pods.go:89] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:15:38.484116   99368 system_pods.go:89] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:15:38.484119   99368 system_pods.go:89] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:15:38.484122   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:15:38.484125   99368 system_pods.go:89] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:15:38.484128   99368 system_pods.go:89] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:15:38.484132   99368 system_pods.go:89] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:15:38.484135   99368 system_pods.go:89] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:15:38.484139   99368 system_pods.go:89] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:15:38.484141   99368 system_pods.go:89] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:15:38.484144   99368 system_pods.go:89] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:15:38.484152   99368 system_pods.go:126] duration metric: took 235.71716ms to wait for k8s-apps to be running ...
	I1010 18:15:38.484162   99368 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:15:38.484219   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:15:38.499587   99368 system_svc.go:56] duration metric: took 15.413149ms WaitForService to wait for kubelet
	I1010 18:15:38.499630   99368 kubeadm.go:582] duration metric: took 23.162321939s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:15:38.499655   99368 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:15:38.645127   99368 request.go:632] Waited for 145.342386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I1010 18:15:38.645247   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I1010 18:15:38.645259   99368 round_trippers.go:469] Request Headers:
	I1010 18:15:38.645267   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:15:38.645272   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:15:38.649291   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:15:38.650032   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:15:38.650065   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:15:38.650077   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:15:38.650081   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:15:38.650086   99368 node_conditions.go:105] duration metric: took 150.425543ms to run NodePressure ...
	I1010 18:15:38.650104   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:15:38.650137   99368 start.go:255] writing updated cluster config ...
	I1010 18:15:38.652551   99368 out.go:201] 
	I1010 18:15:38.654476   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:15:38.654593   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:15:38.656332   99368 out.go:177] * Starting "ha-142481-m03" control-plane node in "ha-142481" cluster
	I1010 18:15:38.657633   99368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:15:38.657659   99368 cache.go:56] Caching tarball of preloaded images
	I1010 18:15:38.657790   99368 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:15:38.657806   99368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:15:38.657908   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:15:38.658076   99368 start.go:360] acquireMachinesLock for ha-142481-m03: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:15:38.658122   99368 start.go:364] duration metric: took 26.16µs to acquireMachinesLock for "ha-142481-m03"
	I1010 18:15:38.658147   99368 start.go:93] Provisioning new machine with config: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:15:38.658249   99368 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1010 18:15:38.660071   99368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 18:15:38.660197   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:15:38.660258   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:15:38.676361   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I1010 18:15:38.676935   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:15:38.677467   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:15:38.677506   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:15:38.677892   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:15:38.678105   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:15:38.678326   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:15:38.678504   99368 start.go:159] libmachine.API.Create for "ha-142481" (driver="kvm2")
	I1010 18:15:38.678538   99368 client.go:168] LocalClient.Create starting
	I1010 18:15:38.678568   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 18:15:38.678601   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:15:38.678614   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:15:38.678663   99368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 18:15:38.678681   99368 main.go:141] libmachine: Decoding PEM data...
	I1010 18:15:38.678691   99368 main.go:141] libmachine: Parsing certificate...
	I1010 18:15:38.678707   99368 main.go:141] libmachine: Running pre-create checks...
	I1010 18:15:38.678715   99368 main.go:141] libmachine: (ha-142481-m03) Calling .PreCreateCheck
	I1010 18:15:38.678898   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:15:38.679630   99368 main.go:141] libmachine: Creating machine...
	I1010 18:15:38.679653   99368 main.go:141] libmachine: (ha-142481-m03) Calling .Create
	I1010 18:15:38.680877   99368 main.go:141] libmachine: (ha-142481-m03) Creating KVM machine...
	I1010 18:15:38.681726   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found existing default KVM network
	I1010 18:15:38.681754   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found existing private KVM network mk-ha-142481
	I1010 18:15:38.681811   99368 main.go:141] libmachine: (ha-142481-m03) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 ...
	I1010 18:15:38.681845   99368 main.go:141] libmachine: (ha-142481-m03) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 18:15:38.681908   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:38.681805  100144 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:15:38.681991   99368 main.go:141] libmachine: (ha-142481-m03) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 18:15:38.938889   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:38.938689  100144 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa...
	I1010 18:15:39.048405   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:39.048265  100144 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/ha-142481-m03.rawdisk...
	I1010 18:15:39.048440   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Writing magic tar header
	I1010 18:15:39.048457   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Writing SSH key tar header
	I1010 18:15:39.048467   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:39.048382  100144 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 ...
	I1010 18:15:39.048494   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03
	I1010 18:15:39.048510   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 18:15:39.048527   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03 (perms=drwx------)
	I1010 18:15:39.048549   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 18:15:39.048564   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 18:15:39.048578   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:15:39.048592   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 18:15:39.048605   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 18:15:39.048635   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 18:15:39.048655   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 18:15:39.048662   99368 main.go:141] libmachine: (ha-142481-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 18:15:39.048676   99368 main.go:141] libmachine: (ha-142481-m03) Creating domain...
	I1010 18:15:39.048685   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home/jenkins
	I1010 18:15:39.048696   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Checking permissions on dir: /home
	I1010 18:15:39.048710   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Skipping /home - not owner
	I1010 18:15:39.049753   99368 main.go:141] libmachine: (ha-142481-m03) define libvirt domain using xml: 
	I1010 18:15:39.049779   99368 main.go:141] libmachine: (ha-142481-m03) <domain type='kvm'>
	I1010 18:15:39.049790   99368 main.go:141] libmachine: (ha-142481-m03)   <name>ha-142481-m03</name>
	I1010 18:15:39.049799   99368 main.go:141] libmachine: (ha-142481-m03)   <memory unit='MiB'>2200</memory>
	I1010 18:15:39.049809   99368 main.go:141] libmachine: (ha-142481-m03)   <vcpu>2</vcpu>
	I1010 18:15:39.049816   99368 main.go:141] libmachine: (ha-142481-m03)   <features>
	I1010 18:15:39.049822   99368 main.go:141] libmachine: (ha-142481-m03)     <acpi/>
	I1010 18:15:39.049830   99368 main.go:141] libmachine: (ha-142481-m03)     <apic/>
	I1010 18:15:39.049835   99368 main.go:141] libmachine: (ha-142481-m03)     <pae/>
	I1010 18:15:39.049839   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.049845   99368 main.go:141] libmachine: (ha-142481-m03)   </features>
	I1010 18:15:39.049849   99368 main.go:141] libmachine: (ha-142481-m03)   <cpu mode='host-passthrough'>
	I1010 18:15:39.049856   99368 main.go:141] libmachine: (ha-142481-m03)   
	I1010 18:15:39.049862   99368 main.go:141] libmachine: (ha-142481-m03)   </cpu>
	I1010 18:15:39.049890   99368 main.go:141] libmachine: (ha-142481-m03)   <os>
	I1010 18:15:39.049903   99368 main.go:141] libmachine: (ha-142481-m03)     <type>hvm</type>
	I1010 18:15:39.049915   99368 main.go:141] libmachine: (ha-142481-m03)     <boot dev='cdrom'/>
	I1010 18:15:39.049926   99368 main.go:141] libmachine: (ha-142481-m03)     <boot dev='hd'/>
	I1010 18:15:39.049939   99368 main.go:141] libmachine: (ha-142481-m03)     <bootmenu enable='no'/>
	I1010 18:15:39.049945   99368 main.go:141] libmachine: (ha-142481-m03)   </os>
	I1010 18:15:39.049956   99368 main.go:141] libmachine: (ha-142481-m03)   <devices>
	I1010 18:15:39.049966   99368 main.go:141] libmachine: (ha-142481-m03)     <disk type='file' device='cdrom'>
	I1010 18:15:39.049980   99368 main.go:141] libmachine: (ha-142481-m03)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/boot2docker.iso'/>
	I1010 18:15:39.049991   99368 main.go:141] libmachine: (ha-142481-m03)       <target dev='hdc' bus='scsi'/>
	I1010 18:15:39.050016   99368 main.go:141] libmachine: (ha-142481-m03)       <readonly/>
	I1010 18:15:39.050029   99368 main.go:141] libmachine: (ha-142481-m03)     </disk>
	I1010 18:15:39.050036   99368 main.go:141] libmachine: (ha-142481-m03)     <disk type='file' device='disk'>
	I1010 18:15:39.050044   99368 main.go:141] libmachine: (ha-142481-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 18:15:39.050056   99368 main.go:141] libmachine: (ha-142481-m03)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/ha-142481-m03.rawdisk'/>
	I1010 18:15:39.050065   99368 main.go:141] libmachine: (ha-142481-m03)       <target dev='hda' bus='virtio'/>
	I1010 18:15:39.050070   99368 main.go:141] libmachine: (ha-142481-m03)     </disk>
	I1010 18:15:39.050075   99368 main.go:141] libmachine: (ha-142481-m03)     <interface type='network'>
	I1010 18:15:39.050081   99368 main.go:141] libmachine: (ha-142481-m03)       <source network='mk-ha-142481'/>
	I1010 18:15:39.050087   99368 main.go:141] libmachine: (ha-142481-m03)       <model type='virtio'/>
	I1010 18:15:39.050092   99368 main.go:141] libmachine: (ha-142481-m03)     </interface>
	I1010 18:15:39.050099   99368 main.go:141] libmachine: (ha-142481-m03)     <interface type='network'>
	I1010 18:15:39.050104   99368 main.go:141] libmachine: (ha-142481-m03)       <source network='default'/>
	I1010 18:15:39.050114   99368 main.go:141] libmachine: (ha-142481-m03)       <model type='virtio'/>
	I1010 18:15:39.050121   99368 main.go:141] libmachine: (ha-142481-m03)     </interface>
	I1010 18:15:39.050128   99368 main.go:141] libmachine: (ha-142481-m03)     <serial type='pty'>
	I1010 18:15:39.050232   99368 main.go:141] libmachine: (ha-142481-m03)       <target port='0'/>
	I1010 18:15:39.050268   99368 main.go:141] libmachine: (ha-142481-m03)     </serial>
	I1010 18:15:39.050282   99368 main.go:141] libmachine: (ha-142481-m03)     <console type='pty'>
	I1010 18:15:39.050294   99368 main.go:141] libmachine: (ha-142481-m03)       <target type='serial' port='0'/>
	I1010 18:15:39.050305   99368 main.go:141] libmachine: (ha-142481-m03)     </console>
	I1010 18:15:39.050315   99368 main.go:141] libmachine: (ha-142481-m03)     <rng model='virtio'>
	I1010 18:15:39.050328   99368 main.go:141] libmachine: (ha-142481-m03)       <backend model='random'>/dev/random</backend>
	I1010 18:15:39.050340   99368 main.go:141] libmachine: (ha-142481-m03)     </rng>
	I1010 18:15:39.050350   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.050359   99368 main.go:141] libmachine: (ha-142481-m03)     
	I1010 18:15:39.050371   99368 main.go:141] libmachine: (ha-142481-m03)   </devices>
	I1010 18:15:39.050378   99368 main.go:141] libmachine: (ha-142481-m03) </domain>
	I1010 18:15:39.050391   99368 main.go:141] libmachine: (ha-142481-m03) 
	I1010 18:15:39.057742   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:01:68:df in network default
	I1010 18:15:39.058339   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring networks are active...
	I1010 18:15:39.058372   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:39.059040   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring network default is active
	I1010 18:15:39.059385   99368 main.go:141] libmachine: (ha-142481-m03) Ensuring network mk-ha-142481 is active
	I1010 18:15:39.060065   99368 main.go:141] libmachine: (ha-142481-m03) Getting domain xml...
	I1010 18:15:39.061108   99368 main.go:141] libmachine: (ha-142481-m03) Creating domain...
	I1010 18:15:40.343936   99368 main.go:141] libmachine: (ha-142481-m03) Waiting to get IP...
	I1010 18:15:40.344892   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.345373   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.345401   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.345319  100144 retry.go:31] will retry after 289.570163ms: waiting for machine to come up
	I1010 18:15:40.637167   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.637765   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.637799   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.637685  100144 retry.go:31] will retry after 311.078832ms: waiting for machine to come up
	I1010 18:15:40.950108   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:40.950581   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:40.950610   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:40.950529  100144 retry.go:31] will retry after 356.951796ms: waiting for machine to come up
	I1010 18:15:41.309147   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:41.309650   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:41.309677   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:41.309602  100144 retry.go:31] will retry after 532.45566ms: waiting for machine to come up
	I1010 18:15:41.843545   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:41.844119   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:41.844147   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:41.844054  100144 retry.go:31] will retry after 601.557958ms: waiting for machine to come up
	I1010 18:15:42.447022   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:42.447619   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:42.447649   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:42.447560  100144 retry.go:31] will retry after 756.716179ms: waiting for machine to come up
	I1010 18:15:43.206472   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:43.207013   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:43.207043   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:43.206973  100144 retry.go:31] will retry after 1.170057285s: waiting for machine to come up
	I1010 18:15:44.378682   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:44.379169   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:44.379199   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:44.379123  100144 retry.go:31] will retry after 1.176461257s: waiting for machine to come up
	I1010 18:15:45.558684   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:45.559193   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:45.559220   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:45.559154  100144 retry.go:31] will retry after 1.48319029s: waiting for machine to come up
	I1010 18:15:47.044036   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:47.044496   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:47.044521   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:47.044430  100144 retry.go:31] will retry after 1.688231692s: waiting for machine to come up
	I1010 18:15:48.734646   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:48.735151   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:48.735174   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:48.735104  100144 retry.go:31] will retry after 2.212019945s: waiting for machine to come up
	I1010 18:15:50.948675   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:50.949207   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:50.949236   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:50.949160  100144 retry.go:31] will retry after 2.319000915s: waiting for machine to come up
	I1010 18:15:53.270642   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:53.271193   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:53.271216   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:53.271155  100144 retry.go:31] will retry after 3.719042495s: waiting for machine to come up
	I1010 18:15:56.994579   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:15:56.995029   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find current IP address of domain ha-142481-m03 in network mk-ha-142481
	I1010 18:15:56.995054   99368 main.go:141] libmachine: (ha-142481-m03) DBG | I1010 18:15:56.994970  100144 retry.go:31] will retry after 5.298417625s: waiting for machine to come up
	I1010 18:16:02.294993   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.295462   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has current primary IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.295487   99368 main.go:141] libmachine: (ha-142481-m03) Found IP for machine: 192.168.39.175
	I1010 18:16:02.295500   99368 main.go:141] libmachine: (ha-142481-m03) Reserving static IP address...
	I1010 18:16:02.295917   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find host DHCP lease matching {name: "ha-142481-m03", mac: "52:54:00:06:ed:5a", ip: "192.168.39.175"} in network mk-ha-142481
	I1010 18:16:02.376364   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Getting to WaitForSSH function...
	I1010 18:16:02.376400   99368 main.go:141] libmachine: (ha-142481-m03) Reserved static IP address: 192.168.39.175
	I1010 18:16:02.376420   99368 main.go:141] libmachine: (ha-142481-m03) Waiting for SSH to be available...
	I1010 18:16:02.379038   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:02.379428   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481
	I1010 18:16:02.379482   99368 main.go:141] libmachine: (ha-142481-m03) DBG | unable to find defined IP address of network mk-ha-142481 interface with MAC address 52:54:00:06:ed:5a
	I1010 18:16:02.379643   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH client type: external
	I1010 18:16:02.379666   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa (-rw-------)
	I1010 18:16:02.379695   99368 main.go:141] libmachine: (ha-142481-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:16:02.379708   99368 main.go:141] libmachine: (ha-142481-m03) DBG | About to run SSH command:
	I1010 18:16:02.379720   99368 main.go:141] libmachine: (ha-142481-m03) DBG | exit 0
	I1010 18:16:02.383609   99368 main.go:141] libmachine: (ha-142481-m03) DBG | SSH cmd err, output: exit status 255: 
	I1010 18:16:02.383645   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1010 18:16:02.383673   99368 main.go:141] libmachine: (ha-142481-m03) DBG | command : exit 0
	I1010 18:16:02.383687   99368 main.go:141] libmachine: (ha-142481-m03) DBG | err     : exit status 255
	I1010 18:16:02.383701   99368 main.go:141] libmachine: (ha-142481-m03) DBG | output  : 
	I1010 18:16:05.385045   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Getting to WaitForSSH function...
	I1010 18:16:05.387500   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.388024   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.388058   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.388149   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH client type: external
	I1010 18:16:05.388172   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa (-rw-------)
	I1010 18:16:05.388198   99368 main.go:141] libmachine: (ha-142481-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 18:16:05.388212   99368 main.go:141] libmachine: (ha-142481-m03) DBG | About to run SSH command:
	I1010 18:16:05.388222   99368 main.go:141] libmachine: (ha-142481-m03) DBG | exit 0
	I1010 18:16:05.517373   99368 main.go:141] libmachine: (ha-142481-m03) DBG | SSH cmd err, output: <nil>: 
	I1010 18:16:05.517675   99368 main.go:141] libmachine: (ha-142481-m03) KVM machine creation complete!
	I1010 18:16:05.517976   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:16:05.518524   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:05.518756   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:05.518928   99368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 18:16:05.518944   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetState
	I1010 18:16:05.520359   99368 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 18:16:05.520374   99368 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 18:16:05.520382   99368 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 18:16:05.520388   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.523092   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.523568   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.523601   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.523714   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.523901   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.524055   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.524156   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.524338   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.524636   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.524669   99368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 18:16:05.632367   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:16:05.632396   99368 main.go:141] libmachine: Detecting the provisioner...
	I1010 18:16:05.632408   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.635809   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.636216   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.636238   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.636547   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.636757   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.636963   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.637090   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.637319   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.637523   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.637539   99368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 18:16:05.749769   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 18:16:05.749833   99368 main.go:141] libmachine: found compatible host: buildroot
	I1010 18:16:05.749840   99368 main.go:141] libmachine: Provisioning with buildroot...
	I1010 18:16:05.749847   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:05.750100   99368 buildroot.go:166] provisioning hostname "ha-142481-m03"
	I1010 18:16:05.750135   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:05.750348   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.753204   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.753697   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.753724   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.753970   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.754155   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.754326   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.754456   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.754597   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.754815   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.754835   99368 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481-m03 && echo "ha-142481-m03" | sudo tee /etc/hostname
	I1010 18:16:05.886094   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481-m03
	
	I1010 18:16:05.886129   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:05.889027   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.889401   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:05.889420   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:05.889629   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:05.889843   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.889995   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:05.890115   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:05.890271   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:05.890474   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:05.890491   99368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:16:06.011027   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:16:06.011075   99368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:16:06.011118   99368 buildroot.go:174] setting up certificates
	I1010 18:16:06.011128   99368 provision.go:84] configureAuth start
	I1010 18:16:06.011159   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetMachineName
	I1010 18:16:06.011515   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.014592   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.015019   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.015050   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.015255   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.017745   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.018212   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.018241   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.018399   99368 provision.go:143] copyHostCerts
	I1010 18:16:06.018428   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:16:06.018461   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:16:06.018471   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:16:06.018534   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:16:06.018611   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:16:06.018628   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:16:06.018635   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:16:06.018659   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:16:06.018703   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:16:06.018722   99368 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:16:06.018728   99368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:16:06.018748   99368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:16:06.018800   99368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481-m03 san=[127.0.0.1 192.168.39.175 ha-142481-m03 localhost minikube]
	I1010 18:16:06.222717   99368 provision.go:177] copyRemoteCerts
	I1010 18:16:06.222779   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:16:06.222805   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.225434   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.225825   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.225848   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.226065   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.226286   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.226456   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.226630   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.315791   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:16:06.315882   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:16:06.343259   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:16:06.343345   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 18:16:06.370749   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:16:06.370822   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:16:06.397148   99368 provision.go:87] duration metric: took 386.005417ms to configureAuth
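	The provisioning steps above copy the host's CA material and generate a server certificate whose SANs cover 127.0.0.1, 192.168.39.175, ha-142481-m03, localhost and minikube, then push ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal shell sketch for checking the result on the node (paths taken from the log; the openssl invocations are generic checks, not part of the test):

	  # inspect the SANs of the server cert that configureAuth installed
	  sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	  # confirm it chains to the copied CA
	  sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem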
	I1010 18:16:06.397183   99368 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:16:06.397452   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:06.397548   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.400947   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.401493   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.401529   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.401697   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.401877   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.402099   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.402329   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.402536   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:06.402752   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:06.402772   99368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:16:06.637717   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:16:06.637751   99368 main.go:141] libmachine: Checking connection to Docker...
	I1010 18:16:06.637762   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetURL
	I1010 18:16:06.639112   99368 main.go:141] libmachine: (ha-142481-m03) DBG | Using libvirt version 6000000
	I1010 18:16:06.641181   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.641548   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.641587   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.641730   99368 main.go:141] libmachine: Docker is up and running!
	I1010 18:16:06.641747   99368 main.go:141] libmachine: Reticulating splines...
	I1010 18:16:06.641756   99368 client.go:171] duration metric: took 27.963208724s to LocalClient.Create
	I1010 18:16:06.641785   99368 start.go:167] duration metric: took 27.963279742s to libmachine.API.Create "ha-142481"
	I1010 18:16:06.641795   99368 start.go:293] postStartSetup for "ha-142481-m03" (driver="kvm2")
	I1010 18:16:06.641804   99368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:16:06.641824   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.642091   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:16:06.642123   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.644087   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.644396   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.644432   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.644567   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.644765   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.644924   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.645078   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.732228   99368 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:16:06.736988   99368 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:16:06.737036   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:16:06.737116   99368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:16:06.737228   99368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:16:06.737241   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:16:06.737350   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:16:06.747599   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:16:06.779643   99368 start.go:296] duration metric: took 137.832802ms for postStartSetup
	I1010 18:16:06.779701   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetConfigRaw
	I1010 18:16:06.780474   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.783287   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.783711   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.783739   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.784133   99368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:16:06.784363   99368 start.go:128] duration metric: took 28.126102871s to createHost
	I1010 18:16:06.784390   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.786724   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.787090   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.787113   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.787327   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.787526   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.787700   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.787826   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.787997   99368 main.go:141] libmachine: Using SSH client type: native
	I1010 18:16:06.788211   99368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I1010 18:16:06.788226   99368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:16:06.901742   99368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584166.882037024
	
	I1010 18:16:06.901769   99368 fix.go:216] guest clock: 1728584166.882037024
	I1010 18:16:06.901778   99368 fix.go:229] Guest: 2024-10-10 18:16:06.882037024 +0000 UTC Remote: 2024-10-10 18:16:06.784377622 +0000 UTC m=+148.714965698 (delta=97.659402ms)
	I1010 18:16:06.901799   99368 fix.go:200] guest clock delta is within tolerance: 97.659402ms
	I1010 18:16:06.901806   99368 start.go:83] releasing machines lock for "ha-142481-m03", held for 28.24367452s
	I1010 18:16:06.901831   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.902170   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:06.904709   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.905164   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.905194   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.907619   99368 out.go:177] * Found network options:
	I1010 18:16:06.909057   99368 out.go:177]   - NO_PROXY=192.168.39.104,192.168.39.186
	W1010 18:16:06.910397   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	W1010 18:16:06.910422   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:16:06.910439   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911020   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911247   99368 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:16:06.911351   99368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:16:06.911394   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	W1010 18:16:06.911428   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	W1010 18:16:06.911458   99368 proxy.go:119] fail to check proxy env: Error ip not in block
	I1010 18:16:06.911514   99368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:16:06.911529   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:16:06.914295   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914543   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914629   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.914656   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914760   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.914838   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:06.914856   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:06.914913   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.915049   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:16:06.915098   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.915168   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:16:06.915225   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:06.915381   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:16:06.915497   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:16:07.163627   99368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:16:07.170344   99368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:16:07.170418   99368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:16:07.188658   99368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:16:07.188691   99368 start.go:495] detecting cgroup driver to use...
	I1010 18:16:07.188764   99368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:16:07.207458   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:16:07.223388   99368 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:16:07.223465   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:16:07.240312   99368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:16:07.258338   99368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:16:07.397297   99368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:16:07.555534   99368 docker.go:233] disabling docker service ...
	I1010 18:16:07.555621   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:16:07.571003   99368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:16:07.585612   99368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:16:07.724995   99368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:16:07.861369   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:16:07.876144   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:16:07.895651   99368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:16:07.895716   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.906721   99368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:16:07.906792   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.917729   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.929016   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.940559   99368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:16:07.953995   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.965226   99368 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.984344   99368 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:16:07.995983   99368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:16:08.006420   99368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 18:16:08.006504   99368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 18:16:08.021735   99368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:16:08.033011   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:08.164791   99368 ssh_runner.go:195] Run: sudo systemctl restart crio
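	The run above switches the node's runtime over to CRI-O: containerd and the docker/cri-docker units are stopped and masked, crictl is pointed at /var/run/crio/crio.sock, pause_image and cgroup_manager are rewritten in /etc/crio/crio.conf.d/02-crio.conf, br_netfilter is loaded and ip_forward enabled, and crio is restarted. A short sketch, assuming SSH access to the node, to verify the outcome (generic commands, not taken from the test):

	  grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf
	  lsmod | grep br_netfilter
	  sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
	  systemctl is-active crio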
	I1010 18:16:08.260672   99368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:16:08.260742   99368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:16:08.271900   99368 start.go:563] Will wait 60s for crictl version
	I1010 18:16:08.271960   99368 ssh_runner.go:195] Run: which crictl
	I1010 18:16:08.275929   99368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:16:08.314672   99368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:16:08.314749   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:16:08.346340   99368 ssh_runner.go:195] Run: crio --version
	I1010 18:16:08.377606   99368 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:16:08.379014   99368 out.go:177]   - env NO_PROXY=192.168.39.104
	I1010 18:16:08.380435   99368 out.go:177]   - env NO_PROXY=192.168.39.104,192.168.39.186
	I1010 18:16:08.381694   99368 main.go:141] libmachine: (ha-142481-m03) Calling .GetIP
	I1010 18:16:08.384544   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:08.384908   99368 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:16:08.384939   99368 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:16:08.385183   99368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:16:08.389725   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:16:08.402638   99368 mustload.go:65] Loading cluster: ha-142481
	I1010 18:16:08.402881   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:08.403135   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:08.403183   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:08.418274   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I1010 18:16:08.418827   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:08.419392   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:08.419418   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:08.419747   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:08.419899   99368 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:16:08.421605   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:16:08.421927   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:08.421980   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:08.437329   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I1010 18:16:08.437789   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:08.438250   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:08.438271   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:08.438615   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:08.438801   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:16:08.438970   99368 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.175
	I1010 18:16:08.438988   99368 certs.go:194] generating shared ca certs ...
	I1010 18:16:08.439008   99368 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.439150   99368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:16:08.439211   99368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:16:08.439224   99368 certs.go:256] generating profile certs ...
	I1010 18:16:08.439325   99368 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:16:08.439355   99368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d
	I1010 18:16:08.439376   99368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.186 192.168.39.175 192.168.39.254]
	I1010 18:16:08.528731   99368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d ...
	I1010 18:16:08.528764   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d: {Name:mk202db6f01b46b51940ca7afe581ede7b3af4e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.528980   99368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d ...
	I1010 18:16:08.528997   99368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d: {Name:mk61783eedf299ba3a6dbb3f62b131938823078c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:08.529112   99368 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.7bb2202d -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:16:08.529294   99368 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.7bb2202d -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
	I1010 18:16:08.529465   99368 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:16:08.529488   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:16:08.529506   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:16:08.529521   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:16:08.529540   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:16:08.529557   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:16:08.529580   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:16:08.529599   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:16:08.545002   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:16:08.545123   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:16:08.545166   99368 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:16:08.545178   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:16:08.545225   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:16:08.545259   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:16:08.545291   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:16:08.545339   99368 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:16:08.545380   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:16:08.545401   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:08.545415   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:16:08.545465   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:16:08.548797   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:08.549296   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:16:08.549316   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:08.549545   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:16:08.549789   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:16:08.549993   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:16:08.550143   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:16:08.629272   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1010 18:16:08.635349   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1010 18:16:08.648258   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1010 18:16:08.653797   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1010 18:16:08.665553   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1010 18:16:08.670066   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1010 18:16:08.681281   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1010 18:16:08.685851   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1010 18:16:08.696759   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1010 18:16:08.701070   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1010 18:16:08.719143   99368 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1010 18:16:08.723782   99368 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1010 18:16:08.735082   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:16:08.763420   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:16:08.789246   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:16:08.814697   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:16:08.840641   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1010 18:16:08.865783   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:16:08.890663   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:16:08.916077   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:16:08.941574   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:16:08.971689   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:16:08.996394   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:16:09.021329   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1010 18:16:09.039289   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1010 18:16:09.058514   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1010 18:16:09.075508   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1010 18:16:09.094047   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1010 18:16:09.112093   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1010 18:16:09.130182   99368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1010 18:16:09.147655   99368 ssh_runner.go:195] Run: openssl version
	I1010 18:16:09.153962   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:16:09.165361   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.170099   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.170163   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:16:09.175991   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:16:09.187134   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:16:09.199298   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.204550   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.204607   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:16:09.210501   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:16:09.222047   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:16:09.233165   99368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.238141   99368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.238209   99368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:16:09.243899   99368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 18:16:09.256154   99368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:16:09.260558   99368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:16:09.260620   99368 kubeadm.go:934] updating node {m03 192.168.39.175 8443 v1.31.1 crio true true} ...
	I1010 18:16:09.260712   99368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:16:09.260747   99368 kube-vip.go:115] generating kube-vip config ...
	I1010 18:16:09.260788   99368 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:16:09.281432   99368 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:16:09.281532   99368 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
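	The manifest above is the kube-vip static pod that advertises the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443; the log later copies it to /etc/kubernetes/manifests/kube-vip.yaml. A small verification sketch, assuming SSH access to the node; the VIP is only bound on whichever control-plane node currently holds the plndr-cp-lock lease:

	  sudo cat /etc/kubernetes/manifests/kube-vip.yaml
	  ip addr show eth0 | grep 192.168.39.254
	  kubectl -n kube-system get lease plndr-cp-lock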
	I1010 18:16:09.281598   99368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:16:09.292238   99368 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1010 18:16:09.292302   99368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1010 18:16:09.302815   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1010 18:16:09.302834   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1010 18:16:09.302847   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:16:09.302858   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:16:09.302874   99368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1010 18:16:09.302911   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1010 18:16:09.302925   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1010 18:16:09.302927   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:16:09.313038   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1010 18:16:09.313076   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1010 18:16:09.313295   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1010 18:16:09.313324   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1010 18:16:09.329019   99368 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:16:09.329132   99368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1010 18:16:09.460792   99368 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1010 18:16:09.460863   99368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
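	Because /var/lib/minikube/binaries/v1.31.1 did not exist on the new machine, kubectl, kubeadm and kubelet are copied over from the host-side cache instead of being re-downloaded. A quick sanity check on the node (paths from the log; the version flags are standard for these binaries):

	  ls -l /var/lib/minikube/binaries/v1.31.1/
	  /var/lib/minikube/binaries/v1.31.1/kubelet --version
	  /var/lib/minikube/binaries/v1.31.1/kubeadm version -o short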
	I1010 18:16:10.167695   99368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1010 18:16:10.178304   99368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1010 18:16:10.196198   99368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:16:10.214107   99368 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1010 18:16:10.231699   99368 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:16:10.235598   99368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
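	The /etc/hosts rewrite above pins control-plane.minikube.internal to the HA VIP 192.168.39.254 so kubeadm can join through the load-balanced endpoint. A hedged check from the node (under default RBAC the unauthenticated /version endpoint should answer, but that is an assumption, not something the test asserts):

	  grep control-plane.minikube.internal /etc/hosts
	  curl -k https://control-plane.minikube.internal:8443/version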
	I1010 18:16:10.249379   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:10.372228   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:16:10.389956   99368 host.go:66] Checking if "ha-142481" exists ...
	I1010 18:16:10.390482   99368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:16:10.390543   99368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:16:10.406538   99368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
	I1010 18:16:10.407120   99368 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:16:10.407715   99368 main.go:141] libmachine: Using API Version  1
	I1010 18:16:10.407745   99368 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:16:10.408171   99368 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:16:10.408424   99368 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:16:10.408616   99368 start.go:317] joinCluster: &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:16:10.408761   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1010 18:16:10.408786   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:16:10.412501   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:10.412938   99368 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:16:10.412967   99368 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:16:10.413287   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:16:10.413489   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:16:10.413662   99368 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:16:10.413878   99368 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:16:10.584962   99368 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:10.585036   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01a2dn.g9vqo5mbslppupip --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m03 --control-plane --apiserver-advertise-address=192.168.39.175 --apiserver-bind-port=8443"
	I1010 18:16:34.116751   99368 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01a2dn.g9vqo5mbslppupip --discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-142481-m03 --control-plane --apiserver-advertise-address=192.168.39.175 --apiserver-bind-port=8443": (23.531656117s)
	I1010 18:16:34.116799   99368 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1010 18:16:34.662406   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-142481-m03 minikube.k8s.io/updated_at=2024_10_10T18_16_34_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=ha-142481 minikube.k8s.io/primary=false
	I1010 18:16:34.812925   99368 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-142481-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1010 18:16:34.939968   99368 start.go:319] duration metric: took 24.531346267s to joinCluster
	I1010 18:16:34.940121   99368 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:34.940600   99368 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:16:34.942338   99368 out.go:177] * Verifying Kubernetes components...
	I1010 18:16:34.943872   99368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:35.261137   99368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:16:35.322955   99368 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:16:35.323214   99368 kapi.go:59] client config for ha-142481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.crt", KeyFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key", CAFile:"/home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432700), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1010 18:16:35.323281   99368 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.104:8443
	I1010 18:16:35.323557   99368 node_ready.go:35] waiting up to 6m0s for node "ha-142481-m03" to be "Ready" ...
	I1010 18:16:35.323656   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:35.323668   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:35.323679   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:35.323685   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:35.327318   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:35.823831   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:35.823858   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:35.823871   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:35.823877   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:35.828659   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:36.324358   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:36.324382   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:36.324391   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:36.324395   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:36.327758   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:36.823911   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:36.823934   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:36.823942   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:36.823946   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:36.827063   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:37.323987   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:37.324011   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:37.324019   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:37.324023   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:37.327375   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:37.328058   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:37.824329   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:37.824354   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:37.824443   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:37.824455   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:37.828067   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:38.323986   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:38.324025   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:38.324040   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:38.324046   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:38.327494   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:38.823762   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:38.823785   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:38.823794   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:38.823798   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:38.827926   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:39.323928   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:39.323957   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:39.323969   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:39.323975   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:39.330422   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:39.331171   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:39.824574   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:39.824598   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:39.824607   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:39.824610   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:39.828722   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:40.324796   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:40.324827   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:40.324838   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:40.324845   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:40.328842   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:40.823953   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:40.823979   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:40.823990   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:40.823996   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:40.828272   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:41.324192   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:41.324218   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:41.324227   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:41.324230   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:41.327987   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:41.824162   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:41.824186   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:41.824198   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:41.824204   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:41.827541   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:41.828232   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:42.324743   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:42.324783   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:42.324794   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:42.324801   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:42.328551   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:42.824718   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:42.824744   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:42.824755   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:42.824760   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:42.828428   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.324320   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:43.324346   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:43.324355   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:43.324364   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:43.328322   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.823956   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:43.824002   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:43.824013   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:43.824019   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:43.827615   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:43.828260   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:44.324587   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:44.324612   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:44.324620   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:44.324623   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:44.328569   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:44.823816   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:44.823840   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:44.823849   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:44.823853   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:44.827589   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.324648   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:45.324673   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:45.324681   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:45.324684   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:45.328227   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.824305   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:45.824330   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:45.824338   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:45.824342   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:45.827901   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:45.828489   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:46.323779   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:46.323813   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:46.323825   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:46.323830   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:46.327223   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:46.823931   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:46.823955   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:46.823964   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:46.823968   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:46.828168   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:47.324172   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:47.324200   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:47.324214   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:47.324232   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:47.327405   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:47.824446   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:47.824470   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:47.824478   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:47.824483   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:47.828085   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:47.828574   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:48.324641   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:48.324666   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:48.324674   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:48.324678   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:48.328399   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:48.823841   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:48.823872   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:48.823883   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:48.823899   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:48.827862   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:49.324364   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:49.324391   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:49.324402   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:49.324410   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:49.329836   99368 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1010 18:16:49.824868   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:49.824898   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:49.824909   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:49.824916   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:49.832424   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:16:49.833781   99368 node_ready.go:53] node "ha-142481-m03" has status "Ready":"False"
	I1010 18:16:50.324106   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:50.324129   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:50.324137   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:50.324141   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:50.327377   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:50.824781   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:50.824809   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:50.824818   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:50.824824   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:50.828461   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:51.324626   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:51.324651   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:51.324659   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:51.324663   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:51.327965   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:51.824004   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:51.824028   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:51.824036   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:51.824041   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:51.827827   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.323895   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.323930   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.323939   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.323943   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.327292   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.327943   99368 node_ready.go:49] node "ha-142481-m03" has status "Ready":"True"
	I1010 18:16:52.327963   99368 node_ready.go:38] duration metric: took 17.004388796s for node "ha-142481-m03" to be "Ready" ...
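The node_ready.go lines above poll GET /api/v1/nodes/ha-142481-m03 roughly every 500ms until the node's Ready condition turns True (about 17s here, with a 6m ceiling). A minimal sketch of that polling pattern in client-go is shown below; it is an assumed illustration, not the minikube source, and reuses the node name and kubeconfig path from the log.

// Sketch: poll a node until its Ready condition reports True or a deadline expires.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19787-81676/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-142481-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}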
	I1010 18:16:52.327973   99368 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 18:16:52.328041   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:52.328051   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.328058   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.328063   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.335352   99368 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1010 18:16:52.341969   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.342092   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-28dll
	I1010 18:16:52.342105   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.342116   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.342121   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.346524   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.347823   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.347844   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.347853   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.347860   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.352427   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.353100   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.353132   99368 pod_ready.go:82] duration metric: took 11.131703ms for pod "coredns-7c65d6cfc9-28dll" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.353146   99368 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.353233   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-xfhq8
	I1010 18:16:52.353246   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.353255   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.353262   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.358189   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:52.359137   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.359158   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.359170   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.359194   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.361882   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.362586   99368 pod_ready.go:93] pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.362606   99368 pod_ready.go:82] duration metric: took 9.449469ms for pod "coredns-7c65d6cfc9-xfhq8" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.362618   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.362680   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481
	I1010 18:16:52.362689   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.362696   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.362701   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.365259   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.365819   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:52.365835   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.365842   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.365857   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.368864   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.369337   99368 pod_ready.go:93] pod "etcd-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.369355   99368 pod_ready.go:82] duration metric: took 6.728138ms for pod "etcd-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.369365   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.369427   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m02
	I1010 18:16:52.369435   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.369442   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.369447   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.371801   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.372469   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:52.372485   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.372496   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.372501   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.374845   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:52.375380   99368 pod_ready.go:93] pod "etcd-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.375400   99368 pod_ready.go:82] duration metric: took 6.028654ms for pod "etcd-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.375414   99368 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.524876   99368 request.go:632] Waited for 149.316037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m03
	I1010 18:16:52.524969   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/etcd-ha-142481-m03
	I1010 18:16:52.524980   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.524993   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.525002   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.528336   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.724349   99368 request.go:632] Waited for 195.357304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.724414   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:52.724419   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.724429   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.724433   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.727821   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:52.728420   99368 pod_ready.go:93] pod "etcd-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:52.728440   99368 pod_ready.go:82] duration metric: took 353.013897ms for pod "etcd-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.728461   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:52.924606   99368 request.go:632] Waited for 196.006652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:16:52.924680   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481
	I1010 18:16:52.924687   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:52.924697   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:52.924702   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:52.928387   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.124197   99368 request.go:632] Waited for 194.992104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:53.124259   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:53.124264   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.124276   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.124281   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.127550   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.128097   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.128116   99368 pod_ready.go:82] duration metric: took 399.647709ms for pod "kube-apiserver-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.128127   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.324538   99368 request.go:632] Waited for 196.340534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:16:53.324600   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m02
	I1010 18:16:53.324606   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.324613   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.324617   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.328266   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.524803   99368 request.go:632] Waited for 195.841443ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:53.524898   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:53.524906   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.524920   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.524931   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.529027   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:53.529616   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.529639   99368 pod_ready.go:82] duration metric: took 401.504985ms for pod "kube-apiserver-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.529650   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.724123   99368 request.go:632] Waited for 194.402378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m03
	I1010 18:16:53.724207   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-142481-m03
	I1010 18:16:53.724212   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.724220   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.724226   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.728029   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.924000   99368 request.go:632] Waited for 195.20231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:53.924121   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:53.924136   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:53.924145   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:53.924149   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:53.927318   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:53.927936   99368 pod_ready.go:93] pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:53.927963   99368 pod_ready.go:82] duration metric: took 398.303309ms for pod "kube-apiserver-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:53.927977   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.124931   99368 request.go:632] Waited for 196.86396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:16:54.125030   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481
	I1010 18:16:54.125037   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.125045   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.125050   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.129323   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:54.324484   99368 request.go:632] Waited for 194.400861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:54.324554   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:54.324564   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.324574   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.324580   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.327854   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.328431   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:54.328451   99368 pod_ready.go:82] duration metric: took 400.466203ms for pod "kube-controller-manager-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.328463   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.524928   99368 request.go:632] Waited for 196.394012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:16:54.524994   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m02
	I1010 18:16:54.525000   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.525008   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.525013   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.528390   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.724248   99368 request.go:632] Waited for 195.108613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:54.724318   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:54.724325   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.724335   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.724341   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.727499   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:54.727990   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:54.728011   99368 pod_ready.go:82] duration metric: took 399.541027ms for pod "kube-controller-manager-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.728023   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:54.924017   99368 request.go:632] Waited for 195.924922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m03
	I1010 18:16:54.924118   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-142481-m03
	I1010 18:16:54.924129   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:54.924137   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:54.924142   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:54.928875   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:55.123960   99368 request.go:632] Waited for 194.31178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.124017   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.124022   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.124030   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.124033   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.127461   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.128120   99368 pod_ready.go:93] pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.128144   99368 pod_ready.go:82] duration metric: took 400.113475ms for pod "kube-controller-manager-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.128160   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cdjzg" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.323986   99368 request.go:632] Waited for 195.748073ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cdjzg
	I1010 18:16:55.324049   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cdjzg
	I1010 18:16:55.324055   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.324063   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.324069   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.327396   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.524493   99368 request.go:632] Waited for 196.370396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.524560   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:55.524567   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.524578   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.524586   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.534026   99368 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1010 18:16:55.534701   99368 pod_ready.go:93] pod "kube-proxy-cdjzg" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.534728   99368 pod_ready.go:82] duration metric: took 406.559679ms for pod "kube-proxy-cdjzg" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.534745   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.724765   99368 request.go:632] Waited for 189.945021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:16:55.724857   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gwvrh
	I1010 18:16:55.724864   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.724872   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.724878   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.727940   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.923972   99368 request.go:632] Waited for 195.304711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:55.924037   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:55.924052   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:55.924078   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:55.924085   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:55.927605   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:55.928243   99368 pod_ready.go:93] pod "kube-proxy-gwvrh" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:55.928264   99368 pod_ready.go:82] duration metric: took 393.511622ms for pod "kube-proxy-gwvrh" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:55.928278   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.124193   99368 request.go:632] Waited for 195.82573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:16:56.124313   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-proxy-srfng
	I1010 18:16:56.124327   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.124336   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.124340   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.127896   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.324881   99368 request.go:632] Waited for 196.244687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:56.324996   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:56.325012   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.325022   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.325029   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.328576   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.329284   99368 pod_ready.go:93] pod "kube-proxy-srfng" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:56.329304   99368 pod_ready.go:82] duration metric: took 401.01865ms for pod "kube-proxy-srfng" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.329315   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.524473   99368 request.go:632] Waited for 195.075639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:16:56.524535   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481
	I1010 18:16:56.524541   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.524548   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.524554   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.527661   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.724798   99368 request.go:632] Waited for 196.388114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:56.724919   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481
	I1010 18:16:56.724934   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.724945   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.724955   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.728172   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:56.728664   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:56.728684   99368 pod_ready.go:82] duration metric: took 399.362342ms for pod "kube-scheduler-ha-142481" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.728700   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:56.924703   99368 request.go:632] Waited for 195.908558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:16:56.924769   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m02
	I1010 18:16:56.924784   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:56.924793   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:56.924796   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:56.928241   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.124466   99368 request.go:632] Waited for 195.354302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:57.124566   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m02
	I1010 18:16:57.124592   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.124604   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.124613   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.128217   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.128748   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:57.128773   99368 pod_ready.go:82] duration metric: took 400.06441ms for pod "kube-scheduler-ha-142481-m02" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.128788   99368 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.323894   99368 request.go:632] Waited for 195.025916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m03
	I1010 18:16:57.323960   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-142481-m03
	I1010 18:16:57.324019   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.324032   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.324036   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.328239   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:57.524431   99368 request.go:632] Waited for 195.425292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:57.524497   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes/ha-142481-m03
	I1010 18:16:57.524503   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.524511   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.524515   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.527825   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.528689   99368 pod_ready.go:93] pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace has status "Ready":"True"
	I1010 18:16:57.528706   99368 pod_ready.go:82] duration metric: took 399.911051ms for pod "kube-scheduler-ha-142481-m03" in "kube-system" namespace to be "Ready" ...
	I1010 18:16:57.528718   99368 pod_ready.go:39] duration metric: took 5.200736466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
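The pod_ready.go phase above walks the system-critical label selectors (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) and checks each pod's Ready condition on every control-plane node. The sketch below is an assumed, simplified version of that check; selectors come from the log, the helper name and kubeconfig path are illustrative.

// Sketch: list kube-system pods per system-critical selector and report readiness.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19787-81676/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			log.Fatal(err)
		}
		for i := range pods.Items {
			fmt.Printf("%s ready=%v\n", pods.Items[i].Name, podReady(&pods.Items[i]))
		}
	}
}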
	I1010 18:16:57.528734   99368 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:16:57.528787   99368 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:16:57.545663   99368 api_server.go:72] duration metric: took 22.605494204s to wait for apiserver process to appear ...
	I1010 18:16:57.545694   99368 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:16:57.545718   99368 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I1010 18:16:57.552066   99368 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I1010 18:16:57.552813   99368 round_trippers.go:463] GET https://192.168.39.104:8443/version
	I1010 18:16:57.552870   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.552882   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.552890   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.555288   99368 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1010 18:16:57.555381   99368 api_server.go:141] control plane version: v1.31.1
	I1010 18:16:57.555401   99368 api_server.go:131] duration metric: took 9.699914ms to wait for apiserver health ...
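api_server.go above performs two quick checks: GET https://192.168.39.104:8443/healthz must return "ok", and GET /version must report the expected control plane version (v1.31.1). The snippet below is an assumed equivalent using the discovery client; it is a sketch, not the test's own code.

// Sketch: query the apiserver /healthz endpoint and the server version.
package main

import (
	"context"
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19787-81676/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of the healthz check logged above; the body should be "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// Equivalent of GET /version; GitVersion corresponds to "control plane version: v1.31.1".
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}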
	I1010 18:16:57.555411   99368 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:16:57.724005   99368 request.go:632] Waited for 168.467999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:57.724082   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:57.724091   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.724106   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.724114   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.730879   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:57.737404   99368 system_pods.go:59] 24 kube-system pods found
	I1010 18:16:57.737436   99368 system_pods.go:61] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:16:57.737442   99368 system_pods.go:61] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:16:57.737445   99368 system_pods.go:61] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:16:57.737449   99368 system_pods.go:61] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:16:57.737452   99368 system_pods.go:61] "etcd-ha-142481-m03" [3f1ae212-d09b-446c-9172-52b9bfc6c20c] Running
	I1010 18:16:57.737456   99368 system_pods.go:61] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:16:57.737459   99368 system_pods.go:61] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:16:57.737463   99368 system_pods.go:61] "kindnet-cjcsf" [237e5649-ed64-401c-befd-99ef520d0761] Running
	I1010 18:16:57.737466   99368 system_pods.go:61] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:16:57.737469   99368 system_pods.go:61] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:16:57.737472   99368 system_pods.go:61] "kube-apiserver-ha-142481-m03" [4c7836a0-6697-4ce5-87d6-582097925f80] Running
	I1010 18:16:57.737476   99368 system_pods.go:61] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:16:57.737480   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:16:57.737484   99368 system_pods.go:61] "kube-controller-manager-ha-142481-m03" [9444eb06-6dc4-44ab-a7d6-2d1d5b3e6410] Running
	I1010 18:16:57.737487   99368 system_pods.go:61] "kube-proxy-cdjzg" [98288460-9764-4e92-a589-e7e34654cfc5] Running
	I1010 18:16:57.737491   99368 system_pods.go:61] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:16:57.737494   99368 system_pods.go:61] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:16:57.737499   99368 system_pods.go:61] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:16:57.737505   99368 system_pods.go:61] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:16:57.737509   99368 system_pods.go:61] "kube-scheduler-ha-142481-m03" [a3eea545-bc31-4990-ad58-a43666964468] Running
	I1010 18:16:57.737512   99368 system_pods.go:61] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:16:57.737515   99368 system_pods.go:61] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:16:57.737519   99368 system_pods.go:61] "kube-vip-ha-142481-m03" [a93b4d63-0f6c-47b5-b987-a082b2b0d51a] Running
	I1010 18:16:57.737522   99368 system_pods.go:61] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:16:57.737528   99368 system_pods.go:74] duration metric: took 182.108204ms to wait for pod list to return data ...
	I1010 18:16:57.737537   99368 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:16:57.923961   99368 request.go:632] Waited for 186.32043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:16:57.924040   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/default/serviceaccounts
	I1010 18:16:57.924048   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:57.924059   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:57.924064   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:57.928023   99368 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1010 18:16:57.928206   99368 default_sa.go:45] found service account: "default"
	I1010 18:16:57.928229   99368 default_sa.go:55] duration metric: took 190.684117ms for default service account to be created ...
	I1010 18:16:57.928243   99368 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:16:58.124915   99368 request.go:632] Waited for 196.547566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:58.124982   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/namespaces/kube-system/pods
	I1010 18:16:58.124989   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:58.124999   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:58.125007   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:58.131096   99368 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1010 18:16:58.138059   99368 system_pods.go:86] 24 kube-system pods found
	I1010 18:16:58.138089   99368 system_pods.go:89] "coredns-7c65d6cfc9-28dll" [1d8bde36-6ab6-4b48-b00a-2a80059ecc11] Running
	I1010 18:16:58.138095   99368 system_pods.go:89] "coredns-7c65d6cfc9-xfhq8" [33d478fb-98f0-4029-87ae-9701a54cecd9] Running
	I1010 18:16:58.138099   99368 system_pods.go:89] "etcd-ha-142481" [fa917068-3ac4-4929-b80b-bfabdc3a9b94] Running
	I1010 18:16:58.138103   99368 system_pods.go:89] "etcd-ha-142481-m02" [8e6c1111-716c-41ff-9d31-ee189956993b] Running
	I1010 18:16:58.138107   99368 system_pods.go:89] "etcd-ha-142481-m03" [3f1ae212-d09b-446c-9172-52b9bfc6c20c] Running
	I1010 18:16:58.138111   99368 system_pods.go:89] "kindnet-4d9v4" [f52506a7-d4f9-4112-8b28-f239c7c8230b] Running
	I1010 18:16:58.138114   99368 system_pods.go:89] "kindnet-5k6j8" [e2d0fb49-23e6-4389-b890-25c03f6089a1] Running
	I1010 18:16:58.138117   99368 system_pods.go:89] "kindnet-cjcsf" [237e5649-ed64-401c-befd-99ef520d0761] Running
	I1010 18:16:58.138120   99368 system_pods.go:89] "kube-apiserver-ha-142481" [b3d0326d-123b-43af-b1a2-b745b884c3b5] Running
	I1010 18:16:58.138124   99368 system_pods.go:89] "kube-apiserver-ha-142481-m02" [36f5732a-f056-4751-83e3-67f84744af8e] Running
	I1010 18:16:58.138127   99368 system_pods.go:89] "kube-apiserver-ha-142481-m03" [4c7836a0-6697-4ce5-87d6-582097925f80] Running
	I1010 18:16:58.138131   99368 system_pods.go:89] "kube-controller-manager-ha-142481" [ad42e72b-6208-44c3-b4cb-ac9794ec9683] Running
	I1010 18:16:58.138134   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m02" [4bd4ce73-1b81-44db-a59f-06c89184d88c] Running
	I1010 18:16:58.138138   99368 system_pods.go:89] "kube-controller-manager-ha-142481-m03" [9444eb06-6dc4-44ab-a7d6-2d1d5b3e6410] Running
	I1010 18:16:58.138141   99368 system_pods.go:89] "kube-proxy-cdjzg" [98288460-9764-4e92-a589-e7e34654cfc5] Running
	I1010 18:16:58.138145   99368 system_pods.go:89] "kube-proxy-gwvrh" [06e8f455-b250-46ea-bbcc-75ed230f2b57] Running
	I1010 18:16:58.138148   99368 system_pods.go:89] "kube-proxy-srfng" [0aad186a-89b6-48d9-8434-1063c7c8a42f] Running
	I1010 18:16:58.138150   99368 system_pods.go:89] "kube-scheduler-ha-142481" [c28a2da7-1807-4111-bf99-84891a0b278a] Running
	I1010 18:16:58.138153   99368 system_pods.go:89] "kube-scheduler-ha-142481-m02" [f29260ee-8ec8-47c4-848d-ee8ff1d06610] Running
	I1010 18:16:58.138156   99368 system_pods.go:89] "kube-scheduler-ha-142481-m03" [a3eea545-bc31-4990-ad58-a43666964468] Running
	I1010 18:16:58.138160   99368 system_pods.go:89] "kube-vip-ha-142481" [3ae4dc2c-27b6-4f93-a605-8de6f0c096d0] Running
	I1010 18:16:58.138163   99368 system_pods.go:89] "kube-vip-ha-142481-m02" [bd50bf3a-5e9c-48d4-8bba-fe1fea6c017d] Running
	I1010 18:16:58.138165   99368 system_pods.go:89] "kube-vip-ha-142481-m03" [a93b4d63-0f6c-47b5-b987-a082b2b0d51a] Running
	I1010 18:16:58.138168   99368 system_pods.go:89] "storage-provisioner" [9d74f64e-6391-4ab0-99ad-8c1711840696] Running
	I1010 18:16:58.138175   99368 system_pods.go:126] duration metric: took 209.923309ms to wait for k8s-apps to be running ...
	I1010 18:16:58.138188   99368 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:16:58.138234   99368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:16:58.154620   99368 system_svc.go:56] duration metric: took 16.42135ms WaitForService to wait for kubelet
	I1010 18:16:58.154660   99368 kubeadm.go:582] duration metric: took 23.214494056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:16:58.154684   99368 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:16:58.324577   99368 request.go:632] Waited for 169.800219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.104:8443/api/v1/nodes
	I1010 18:16:58.324670   99368 round_trippers.go:463] GET https://192.168.39.104:8443/api/v1/nodes
	I1010 18:16:58.324677   99368 round_trippers.go:469] Request Headers:
	I1010 18:16:58.324687   99368 round_trippers.go:473]     Accept: application/json, */*
	I1010 18:16:58.324694   99368 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1010 18:16:58.328908   99368 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1010 18:16:58.329887   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329907   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329918   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329922   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329926   99368 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 18:16:58.329929   99368 node_conditions.go:123] node cpu capacity is 2
	I1010 18:16:58.329932   99368 node_conditions.go:105] duration metric: took 175.242574ms to run NodePressure ...
	I1010 18:16:58.329945   99368 start.go:241] waiting for startup goroutines ...
	I1010 18:16:58.329965   99368 start.go:255] writing updated cluster config ...
	I1010 18:16:58.330248   99368 ssh_runner.go:195] Run: rm -f paused
	I1010 18:16:58.382565   99368 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 18:16:58.384704   99368 out.go:177] * Done! kubectl is now configured to use "ha-142481" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.139918419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e313b95-d017-4610-880b-cd49915cd27c name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.140137688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e313b95-d017-4610-880b-cd49915cd27c name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.180934679Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90ed115c-190d-490f-b2ff-bceb14e734d1 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.181029427Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90ed115c-190d-490f-b2ff-bceb14e734d1 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.181937706Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48469934-a55a-4914-a831-685944d83387 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.182334543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584459182314945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48469934-a55a-4914-a831-685944d83387 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.182834744Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da7cf986-758f-4b23-b1c1-119de4b4efe7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.182904652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da7cf986-758f-4b23-b1c1-119de4b4efe7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.183110881Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da7cf986-758f-4b23-b1c1-119de4b4efe7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.216225544Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=d2176f5a-c91d-4d0e-86ed-956131821066 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.216397107Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d2176f5a-c91d-4d0e-86ed-956131821066 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.226888450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9139b49d-4ad3-4511-b7fa-0b1e0f76619e name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.226976281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9139b49d-4ad3-4511-b7fa-0b1e0f76619e name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.228535525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96ec967d-414c-46ee-b758-c73000a2127d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.229236043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584459229213656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96ec967d-414c-46ee-b758-c73000a2127d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.229799099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c35342f-d5d4-44ae-b673-d71faca11ce0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.229874016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c35342f-d5d4-44ae-b673-d71faca11ce0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.230088714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c35342f-d5d4-44ae-b673-d71faca11ce0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.271881109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6bbd889-9f24-4b53-b1db-73b219a0ab74 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.272000249Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6bbd889-9f24-4b53-b1db-73b219a0ab74 name=/runtime.v1.RuntimeService/Version
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.273785309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b5e6b88-ae40-4a8d-b8c5-e9c19fac2dee name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.274551665Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584459274526501,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b5e6b88-ae40-4a8d-b8c5-e9c19fac2dee name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.275459985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c93286bb-fac3-46f2-9282-6851bc77df02 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.275528000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c93286bb-fac3-46f2-9282-6851bc77df02 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 18:20:59 ha-142481 crio[662]: time="2024-10-10 18:20:59.275816159Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c07ad1fe2bce4fa8373def3cddec28c3e2552242ac1c39c134f1ae1b46e61fc7,PodSandboxId:0cebb1db5e1d3f09063f319b62399492ea2520962e06e43ba786fceadf07a397,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728584222925867591,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-xnwpj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ddbe05a6-e1b2-4b9d-b285-ed6a97c9ea98,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37,PodSandboxId:84952d68d14fb8ddef31a76a7d3e10fc7ae9a1bb453a588629cf59972d4cc64d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076458035667,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-xfhq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33d478fb-98f0-4029-87ae-9701a54cecd9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e,PodSandboxId:20b740049c58515dfa475c0041efc539108a367ad35f9468d4a5dd454a1ba4c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728584076394205343,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-28dll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d8bde36-6ab6-4b48-b00a-2a80059ecc11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2eb7357e74059a7301a73b2f96692b18182bf02d619521a79db9b92779a3b9d3,PodSandboxId:a78996796d2ead6b297dda5318a5aee95e782ac04d0f6d8ae10bb40361461d5e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728584076376848222,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d74f64e-6391-4ab0-99ad-8c1711840696,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3,PodSandboxId:d5a1a0a19e5bca4c63661ad0dc5406e3b343827728591b94c793305e3054b5bb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17285840
64156910390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4d9v4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f52506a7-d4f9-4112-8b28-f239c7c8230b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9,PodSandboxId:63eed92e7516ac5ee123fb9b271b50065e605524dc4b10606329006c357fe8c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728584064036938001,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwvrh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06e8f455-b250-46ea-bbcc-75ed230f2b57,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4,PodSandboxId:ef586683ae3a5a110607194cda298fc606be593d4b2e270644dcc4828413d726,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728584055179658221,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca912ab25249d4277c44ead7f9f185cc,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c,PodSandboxId:a1a198bd8221ce4fda4789734e1aba8d3c7f89613592cf88b2fe723e9517e7f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728584052280961387,Labels:map[string]string{io.kubernetes.container.name: kub
e-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ba30574b93c2cf97990905a11b77e8d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf,PodSandboxId:df70f8cffd3d4746ecd973b5cc68d7d7162723d607a107e3fc78197661178de8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728584052264112180,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ddee910ffb9f86d5728fc6243e0b7a0,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026,PodSandboxId:cf562380e5c8d4bcf939e430aad1b55e383372d779380ea7d38feca9c503cb13,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728584052244936539,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.
kubernetes.pod.name: kube-scheduler-ha-142481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4d0e8861f11277dd58b525d7e9f233b,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58,PodSandboxId:84fece63e17b567b38dff30b098d026073682dacac0ce0da5bf2ed8a3b0c34de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728584052158394276,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-142481,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9682645df643fbbf1d6db4f0d49467bf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c93286bb-fac3-46f2-9282-6851bc77df02 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c07ad1fe2bce4       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   0cebb1db5e1d3       busybox-7dff88458-xnwpj
	018e6370bdfda       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   84952d68d14fb       coredns-7c65d6cfc9-xfhq8
	5c208648c013d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   20b740049c585       coredns-7c65d6cfc9-28dll
	2eb7357e74059       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   a78996796d2ea       storage-provisioner
	b32ac96128061       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   d5a1a0a19e5bc       kindnet-4d9v4
	9f7d32719ebd2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   63eed92e7516a       kube-proxy-gwvrh
	80e86419d2aad       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   ef586683ae3a5       kube-vip-ha-142481
	751981b34b5e9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   a1a198bd8221c       kube-apiserver-ha-142481
	4d7eb644bee42       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   df70f8cffd3d4       kube-controller-manager-ha-142481
	43b160f9e1140       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   cf562380e5c8d       kube-scheduler-ha-142481
	206693e605977       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   84fece63e17b5       etcd-ha-142481
	
	
	==> coredns [018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37] <==
	[INFO] 10.244.1.2:34545 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001557695s
	[INFO] 10.244.1.2:38085 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108964s
	[INFO] 10.244.1.2:51531 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000130545s
	[INFO] 10.244.0.4:44429 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002010271s
	[INFO] 10.244.0.4:54303 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097043s
	[INFO] 10.244.0.4:42398 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046814s
	[INFO] 10.244.0.4:45760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00003792s
	[INFO] 10.244.2.2:37649 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126566s
	[INFO] 10.244.2.2:40587 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124439s
	[INFO] 10.244.2.2:57109 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008569s
	[INFO] 10.244.1.2:44569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190494s
	[INFO] 10.244.1.2:36745 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000100275s
	[INFO] 10.244.1.2:43935 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110935s
	[INFO] 10.244.0.4:38393 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150867s
	[INFO] 10.244.0.4:42701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114037s
	[INFO] 10.244.0.4:38022 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153775s
	[INFO] 10.244.0.4:54617 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066619s
	[INFO] 10.244.2.2:38084 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000171s
	[INFO] 10.244.2.2:42518 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000188177s
	[INFO] 10.244.2.2:46288 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151696s
	[INFO] 10.244.1.2:54065 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000167454s
	[INFO] 10.244.1.2:49349 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000138818s
	[INFO] 10.244.0.4:46873 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110042s
	[INFO] 10.244.0.4:51740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092418s
	[INFO] 10.244.0.4:46743 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066541s
	
	
	==> coredns [5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51137 - 38313 "HINFO IN 987630183612321637.831480708693955805. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.022844151s
	[INFO] 10.244.2.2:42578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001085393s
	[INFO] 10.244.1.2:46574 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002185448s
	[INFO] 10.244.0.4:39782 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001587443s
	[INFO] 10.244.0.4:53063 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000500521s
	[INFO] 10.244.2.2:54233 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000215976s
	[INFO] 10.244.2.2:58923 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163879s
	[INFO] 10.244.1.2:45749 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253197s
	[INFO] 10.244.1.2:48261 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001731s
	[INFO] 10.244.1.2:46306 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000179475s
	[INFO] 10.244.0.4:41358 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015898s
	[INFO] 10.244.0.4:57383 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000192727s
	[INFO] 10.244.0.4:41993 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083721s
	[INFO] 10.244.0.4:60789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001398106s
	[INFO] 10.244.2.2:56030 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145862s
	[INFO] 10.244.1.2:34434 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144043s
	[INFO] 10.244.2.2:40687 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170156s
	[INFO] 10.244.1.2:56591 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140447s
	[INFO] 10.244.1.2:34586 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215712s
	[INFO] 10.244.0.4:49420 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000094221s
	
	
	==> describe nodes <==
	Name:               ha-142481
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T18_14_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:14:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:17:25 +0000   Thu, 10 Oct 2024 18:14:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-142481
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 103fd1cad9094f108b20248867a8c9f2
	  System UUID:                103fd1ca-d909-4f10-8b20-248867a8c9f2
	  Boot ID:                    ea46d519-f733-4cdc-b631-5fb0eb75e07c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xnwpj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 coredns-7c65d6cfc9-28dll             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m36s
	  kube-system                 coredns-7c65d6cfc9-xfhq8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m36s
	  kube-system                 etcd-ha-142481                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m38s
	  kube-system                 kindnet-4d9v4                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m37s
	  kube-system                 kube-apiserver-ha-142481             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-controller-manager-ha-142481    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-proxy-gwvrh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-scheduler-ha-142481             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-vip-ha-142481                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m35s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m48s (x7 over 6m48s)  kubelet          Node ha-142481 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m48s (x8 over 6m48s)  kubelet          Node ha-142481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m48s (x8 over 6m48s)  kubelet          Node ha-142481 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m38s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m38s                  kubelet          Node ha-142481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s                  kubelet          Node ha-142481 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m38s                  kubelet          Node ha-142481 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m37s                  node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	  Normal  NodeReady                6m24s                  kubelet          Node ha-142481 status is now: NodeReady
	  Normal  RegisteredNode           5m38s                  node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-142481 event: Registered Node ha-142481 in Controller
	
	
	Name:               ha-142481-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_15_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:15:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:18:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 10 Oct 2024 18:17:15 +0000   Thu, 10 Oct 2024 18:18:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.186
	  Hostname:    ha-142481-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 64af1b9db3cc41a38fc696e261399a82
	  System UUID:                64af1b9d-b3cc-41a3-8fc6-96e261399a82
	  Boot ID:                    1ad9a5aa-6f71-4b62-94f2-fcfc6f775bcc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-wf7qs                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 etcd-ha-142481-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m45s
	  kube-system                 kindnet-5k6j8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m47s
	  kube-system                 kube-apiserver-ha-142481-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-controller-manager-ha-142481-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-srfng                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-scheduler-ha-142481-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-vip-ha-142481-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m42s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m47s (x8 over 5m47s)  kubelet          Node ha-142481-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m47s (x8 over 5m47s)  kubelet          Node ha-142481-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s (x7 over 5m47s)  kubelet          Node ha-142481-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m42s                  node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  RegisteredNode           5m38s                  node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-142481-m02 event: Registered Node ha-142481-m02 in Controller
	  Normal  NodeNotReady             2m2s                   node-controller  Node ha-142481-m02 status is now: NodeNotReady
	
	
	Name:               ha-142481-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_16_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:16:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:17:32 +0000   Thu, 10 Oct 2024 18:16:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    ha-142481-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 940ef061e50d4431baad36dbbc54f8b4
	  System UUID:                940ef061-e50d-4431-baad-36dbbc54f8b4
	  Boot ID:                    48ae8d44-92c8-45fc-a610-982f0242851e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5544l                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 etcd-ha-142481-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m26s
	  kube-system                 kindnet-cjcsf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m28s
	  kube-system                 kube-apiserver-ha-142481-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-ha-142481-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-proxy-cdjzg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-ha-142481-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-vip-ha-142481-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m28s (x8 over 4m28s)  kubelet          Node ha-142481-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s (x8 over 4m28s)  kubelet          Node ha-142481-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s (x7 over 4m28s)  kubelet          Node ha-142481-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m27s                  node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-142481-m03 event: Registered Node ha-142481-m03 in Controller
	
	
	Name:               ha-142481-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-142481-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=ha-142481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_10T18_17_40_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 18:17:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-142481-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 18:20:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 18:18:09 +0000   Thu, 10 Oct 2024 18:17:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    ha-142481-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 98346cf85e5d4e1e831142d0f2e86f20
	  System UUID:                98346cf8-5e5d-4e1e-8311-42d0f2e86f20
	  Boot ID:                    0fd379eb-2eaf-4e1b-aeda-b9abfe41644d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-qbvk6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m20s
	  kube-system                 kube-proxy-4xzhw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m20s (x2 over 3m20s)  kubelet          Node ha-142481-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m20s (x2 over 3m20s)  kubelet          Node ha-142481-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m20s (x2 over 3m20s)  kubelet          Node ha-142481-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-142481-m04 event: Registered Node ha-142481-m04 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-142481-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct10 18:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050451] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040403] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.885132] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.655679] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.952802] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct10 18:14] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.063573] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063579] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.169358] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.137879] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.284778] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.055847] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +3.359583] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.065935] kauditd_printk_skb: 158 callbacks suppressed
	[ +10.163908] systemd-fstab-generator[1291]: Ignoring "noauto" option for root device
	[  +0.085716] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.930913] kauditd_printk_skb: 69 callbacks suppressed
	[Oct10 18:15] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58] <==
	{"level":"warn","ts":"2024-10-10T18:20:59.569936Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.576506Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.581078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.592940Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.600335Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.608811Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.613854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.617776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.624484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.632906Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.640446Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.644500Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.647236Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.649053Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.654775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.661181Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.676157Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.680259Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.683165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.686715Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.692365Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.698260Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.704837Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.707221Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-10T18:20:59.747010Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"223628dc6b2f68bd","from":"223628dc6b2f68bd","remote-peer-id":"74546b85ba45f826","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:20:59 up 7 min,  0 users,  load average: 0.39, 0.37, 0.19
	Linux ha-142481 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3] <==
	I1010 18:20:25.390755       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:35.399378       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:35.399430       1 main.go:299] handling current node
	I1010 18:20:35.399452       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:35.399457       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	I1010 18:20:35.399642       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:35.399667       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:35.399718       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:35.399723       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:45.399629       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:45.399760       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:45.399950       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:45.399978       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	I1010 18:20:45.400080       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:45.400105       1 main.go:299] handling current node
	I1010 18:20:45.400138       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:45.400158       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	I1010 18:20:55.390711       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I1010 18:20:55.390846       1 main.go:299] handling current node
	I1010 18:20:55.390876       1 main.go:295] Handling node with IPs: map[192.168.39.186:{}]
	I1010 18:20:55.390894       1 main.go:322] Node ha-142481-m02 has CIDR [10.244.1.0/24] 
	I1010 18:20:55.391070       1 main.go:295] Handling node with IPs: map[192.168.39.175:{}]
	I1010 18:20:55.391111       1 main.go:322] Node ha-142481-m03 has CIDR [10.244.2.0/24] 
	I1010 18:20:55.391172       1 main.go:295] Handling node with IPs: map[192.168.39.164:{}]
	I1010 18:20:55.391190       1 main.go:322] Node ha-142481-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c] <==
	I1010 18:14:21.601752       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1010 18:14:21.615538       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1010 18:14:22.685756       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1010 18:14:22.961093       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1010 18:15:13.597943       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.598021       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.162µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1010 18:15:13.599137       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.600311       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1010 18:15:13.601619       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.769951ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1010 18:17:03.850296       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50978: use of closed network connection
	E1010 18:17:04.060164       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:50998: use of closed network connection
	E1010 18:17:04.265073       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51022: use of closed network connection
	E1010 18:17:04.497148       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51026: use of closed network connection
	E1010 18:17:04.691753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51052: use of closed network connection
	E1010 18:17:04.874313       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51072: use of closed network connection
	E1010 18:17:05.055509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51096: use of closed network connection
	E1010 18:17:05.241806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51110: use of closed network connection
	E1010 18:17:05.418962       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:51128: use of closed network connection
	E1010 18:17:05.714305       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35886: use of closed network connection
	E1010 18:17:05.894226       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35894: use of closed network connection
	E1010 18:17:06.084951       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35922: use of closed network connection
	E1010 18:17:06.281751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35936: use of closed network connection
	E1010 18:17:06.459430       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35954: use of closed network connection
	E1010 18:17:06.642941       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35966: use of closed network connection
	W1010 18:18:37.363890       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.175]
	
	
	==> kube-controller-manager [4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf] <==
	I1010 18:17:39.636355       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-142481-m04" podCIDRs=["10.244.3.0/24"]
	I1010 18:17:39.636414       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.636469       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.668112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:39.689740       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:40.177402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:40.233291       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:41.187681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:41.226193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:42.243172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:42.243646       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-142481-m04"
	I1010 18:17:42.333986       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:49.941287       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:59.249000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:17:59.249257       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-142481-m04"
	I1010 18:17:59.269371       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:00.212787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:09.988078       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m04"
	I1010 18:18:57.270927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-142481-m04"
	I1010 18:18:57.272138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:18:57.296852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:18:57.478314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.230176ms"
	I1010 18:18:57.478428       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.474µs"
	I1010 18:19:00.278371       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	I1010 18:19:02.479119       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-142481-m02"
	
	
	==> kube-proxy [9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 18:14:24.446239       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 18:14:24.508320       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.104"]
	E1010 18:14:24.508809       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:14:24.556831       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 18:14:24.556922       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 18:14:24.556961       1 server_linux.go:169] "Using iptables Proxier"
	I1010 18:14:24.559536       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:14:24.560518       1 server.go:483] "Version info" version="v1.31.1"
	I1010 18:14:24.560742       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:14:24.562971       1 config.go:199] "Starting service config controller"
	I1010 18:14:24.563611       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 18:14:24.563720       1 config.go:105] "Starting endpoint slice config controller"
	I1010 18:14:24.563744       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 18:14:24.566215       1 config.go:328] "Starting node config controller"
	I1010 18:14:24.566227       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 18:14:24.665476       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1010 18:14:24.665712       1 shared_informer.go:320] Caches are synced for service config
	I1010 18:14:24.667666       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026] <==
	W1010 18:14:16.494936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1010 18:14:16.495042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.517223       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1010 18:14:16.517488       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1010 18:14:16.544128       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 18:14:16.544233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.560806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.560856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.640427       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1010 18:14:16.640554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.701938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.702008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.773339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.773523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 18:14:16.873800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1010 18:14:16.874006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1010 18:14:18.221733       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1010 18:16:59.352658       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wf7qs\": pod busybox-7dff88458-wf7qs is already assigned to node \"ha-142481-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-wf7qs" node="ha-142481-m02"
	E1010 18:16:59.352878       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8cfeb378-41dd-4850-bbc6-610453612cf5(default/busybox-7dff88458-wf7qs) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-wf7qs"
	E1010 18:16:59.352933       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-wf7qs\": pod busybox-7dff88458-wf7qs is already assigned to node \"ha-142481-m02\"" pod="default/busybox-7dff88458-wf7qs"
	I1010 18:16:59.352990       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-wf7qs" node="ha-142481-m02"
	E1010 18:17:39.876287       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qbvk6\": pod kindnet-qbvk6 is already assigned to node \"ha-142481-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qbvk6" node="ha-142481-m04"
	E1010 18:17:39.876531       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 67b280c2-562d-45e0-a362-726dadaf5cf6(kube-system/kindnet-qbvk6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qbvk6"
	E1010 18:17:39.876554       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qbvk6\": pod kindnet-qbvk6 is already assigned to node \"ha-142481-m04\"" pod="kube-system/kindnet-qbvk6"
	I1010 18:17:39.876861       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qbvk6" node="ha-142481-m04"
	
	
	==> kubelet <==
	Oct 10 18:19:21 ha-142481 kubelet[1298]: E1010 18:19:21.653774    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584361653351989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:21 ha-142481 kubelet[1298]: E1010 18:19:21.654165    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584361653351989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:31 ha-142481 kubelet[1298]: E1010 18:19:31.655501    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584371655103881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:31 ha-142481 kubelet[1298]: E1010 18:19:31.656061    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584371655103881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:41 ha-142481 kubelet[1298]: E1010 18:19:41.657888    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584381657459506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:41 ha-142481 kubelet[1298]: E1010 18:19:41.657923    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584381657459506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:51 ha-142481 kubelet[1298]: E1010 18:19:51.662516    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584391660533273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:19:51 ha-142481 kubelet[1298]: E1010 18:19:51.662805    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584391660533273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:01 ha-142481 kubelet[1298]: E1010 18:20:01.665482    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584401664880599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:01 ha-142481 kubelet[1298]: E1010 18:20:01.665528    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584401664880599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:11 ha-142481 kubelet[1298]: E1010 18:20:11.668335    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584411667894103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:11 ha-142481 kubelet[1298]: E1010 18:20:11.668374    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584411667894103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.541634    1298 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 18:20:21 ha-142481 kubelet[1298]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 18:20:21 ha-142481 kubelet[1298]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.670317    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584421670063294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:21 ha-142481 kubelet[1298]: E1010 18:20:21.670363    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584421670063294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:31 ha-142481 kubelet[1298]: E1010 18:20:31.672182    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584431671864331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:31 ha-142481 kubelet[1298]: E1010 18:20:31.672436    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584431671864331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:41 ha-142481 kubelet[1298]: E1010 18:20:41.682034    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584441680876363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:41 ha-142481 kubelet[1298]: E1010 18:20:41.682449    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584441680876363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:51 ha-142481 kubelet[1298]: E1010 18:20:51.683877    1298 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584451683638908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 18:20:51 ha-142481 kubelet[1298]: E1010 18:20:51.683909    1298 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728584451683638908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-142481 -n ha-142481
helpers_test.go:261: (dbg) Run:  kubectl --context ha-142481 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.16s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (401.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-142481 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-142481 -v=7 --alsologtostderr
E1010 18:22:50.018849   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-142481 -v=7 --alsologtostderr: exit status 82 (2m1.968860484s)

                                                
                                                
-- stdout --
	* Stopping node "ha-142481-m04"  ...
	* Stopping node "ha-142481-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 18:21:00.828711  105052 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:21:00.828825  105052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:00.828836  105052 out.go:358] Setting ErrFile to fd 2...
	I1010 18:21:00.828840  105052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:00.829085  105052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:21:00.829339  105052 out.go:352] Setting JSON to false
	I1010 18:21:00.829428  105052 mustload.go:65] Loading cluster: ha-142481
	I1010 18:21:00.829855  105052 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:21:00.829949  105052 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:21:00.830131  105052 mustload.go:65] Loading cluster: ha-142481
	I1010 18:21:00.830257  105052 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:21:00.830283  105052 stop.go:39] StopHost: ha-142481-m04
	I1010 18:21:00.830664  105052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:21:00.830717  105052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:21:00.846509  105052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36783
	I1010 18:21:00.847197  105052 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:21:00.847956  105052 main.go:141] libmachine: Using API Version  1
	I1010 18:21:00.847991  105052 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:21:00.848341  105052 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:21:00.851214  105052 out.go:177] * Stopping node "ha-142481-m04"  ...
	I1010 18:21:00.852716  105052 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1010 18:21:00.852755  105052 main.go:141] libmachine: (ha-142481-m04) Calling .DriverName
	I1010 18:21:00.853017  105052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1010 18:21:00.853047  105052 main.go:141] libmachine: (ha-142481-m04) Calling .GetSSHHostname
	I1010 18:21:00.856190  105052 main.go:141] libmachine: (ha-142481-m04) DBG | domain ha-142481-m04 has defined MAC address 52:54:00:5e:f1:0f in network mk-ha-142481
	I1010 18:21:00.856713  105052 main.go:141] libmachine: (ha-142481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:f1:0f", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:17:22 +0000 UTC Type:0 Mac:52:54:00:5e:f1:0f Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-142481-m04 Clientid:01:52:54:00:5e:f1:0f}
	I1010 18:21:00.856759  105052 main.go:141] libmachine: (ha-142481-m04) DBG | domain ha-142481-m04 has defined IP address 192.168.39.164 and MAC address 52:54:00:5e:f1:0f in network mk-ha-142481
	I1010 18:21:00.856916  105052 main.go:141] libmachine: (ha-142481-m04) Calling .GetSSHPort
	I1010 18:21:00.857084  105052 main.go:141] libmachine: (ha-142481-m04) Calling .GetSSHKeyPath
	I1010 18:21:00.857321  105052 main.go:141] libmachine: (ha-142481-m04) Calling .GetSSHUsername
	I1010 18:21:00.857511  105052 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m04/id_rsa Username:docker}
	I1010 18:21:00.946227  105052 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1010 18:21:01.002089  105052 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1010 18:21:01.057292  105052 main.go:141] libmachine: Stopping "ha-142481-m04"...
	I1010 18:21:01.057331  105052 main.go:141] libmachine: (ha-142481-m04) Calling .GetState
	I1010 18:21:01.059117  105052 main.go:141] libmachine: (ha-142481-m04) Calling .Stop
	I1010 18:21:01.062960  105052 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 0/120
	I1010 18:21:02.298702  105052 main.go:141] libmachine: (ha-142481-m04) Calling .GetState
	I1010 18:21:02.300028  105052 main.go:141] libmachine: Machine "ha-142481-m04" was stopped.
	I1010 18:21:02.300050  105052 stop.go:75] duration metric: took 1.447340083s to stop
	I1010 18:21:02.300077  105052 stop.go:39] StopHost: ha-142481-m03
	I1010 18:21:02.300495  105052 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:21:02.300553  105052 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:21:02.315597  105052 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I1010 18:21:02.316090  105052 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:21:02.316768  105052 main.go:141] libmachine: Using API Version  1
	I1010 18:21:02.316796  105052 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:21:02.317128  105052 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:21:02.319724  105052 out.go:177] * Stopping node "ha-142481-m03"  ...
	I1010 18:21:02.321218  105052 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1010 18:21:02.321257  105052 main.go:141] libmachine: (ha-142481-m03) Calling .DriverName
	I1010 18:21:02.321549  105052 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1010 18:21:02.321583  105052 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHHostname
	I1010 18:21:02.325209  105052 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:21:02.325713  105052 main.go:141] libmachine: (ha-142481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:ed:5a", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:15:53 +0000 UTC Type:0 Mac:52:54:00:06:ed:5a Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:ha-142481-m03 Clientid:01:52:54:00:06:ed:5a}
	I1010 18:21:02.325747  105052 main.go:141] libmachine: (ha-142481-m03) DBG | domain ha-142481-m03 has defined IP address 192.168.39.175 and MAC address 52:54:00:06:ed:5a in network mk-ha-142481
	I1010 18:21:02.325868  105052 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHPort
	I1010 18:21:02.326036  105052 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHKeyPath
	I1010 18:21:02.326180  105052 main.go:141] libmachine: (ha-142481-m03) Calling .GetSSHUsername
	I1010 18:21:02.326329  105052 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m03/id_rsa Username:docker}
	I1010 18:21:02.418954  105052 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1010 18:21:02.473717  105052 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1010 18:21:02.532226  105052 main.go:141] libmachine: Stopping "ha-142481-m03"...
	I1010 18:21:02.532271  105052 main.go:141] libmachine: (ha-142481-m03) Calling .GetState
	I1010 18:21:02.533989  105052 main.go:141] libmachine: (ha-142481-m03) Calling .Stop
	I1010 18:21:02.537732  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 0/120
	I1010 18:21:03.539176  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 1/120
	I1010 18:21:04.540772  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 2/120
	I1010 18:21:05.542366  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 3/120
	I1010 18:21:06.543862  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 4/120
	I1010 18:21:07.545840  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 5/120
	I1010 18:21:08.547509  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 6/120
	I1010 18:21:09.549064  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 7/120
	I1010 18:21:10.551534  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 8/120
	I1010 18:21:11.553095  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 9/120
	I1010 18:21:12.555600  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 10/120
	I1010 18:21:13.557092  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 11/120
	I1010 18:21:14.559759  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 12/120
	I1010 18:21:15.561247  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 13/120
	I1010 18:21:16.563021  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 14/120
	I1010 18:21:17.565046  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 15/120
	I1010 18:21:18.566943  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 16/120
	I1010 18:21:19.568544  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 17/120
	I1010 18:21:20.570281  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 18/120
	I1010 18:21:21.571770  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 19/120
	I1010 18:21:22.573929  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 20/120
	I1010 18:21:23.575931  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 21/120
	I1010 18:21:24.577616  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 22/120
	I1010 18:21:25.579551  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 23/120
	I1010 18:21:26.581237  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 24/120
	I1010 18:21:27.583251  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 25/120
	I1010 18:21:28.585876  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 26/120
	I1010 18:21:29.587963  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 27/120
	I1010 18:21:30.589759  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 28/120
	I1010 18:21:31.591475  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 29/120
	I1010 18:21:32.594066  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 30/120
	I1010 18:21:33.595455  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 31/120
	I1010 18:21:34.597285  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 32/120
	I1010 18:21:35.598585  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 33/120
	I1010 18:21:36.599911  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 34/120
	I1010 18:21:37.601851  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 35/120
	I1010 18:21:38.603135  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 36/120
	I1010 18:21:39.604630  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 37/120
	I1010 18:21:40.606047  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 38/120
	I1010 18:21:41.607450  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 39/120
	I1010 18:21:42.608954  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 40/120
	I1010 18:21:43.610881  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 41/120
	I1010 18:21:44.612216  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 42/120
	I1010 18:21:45.613661  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 43/120
	I1010 18:21:46.614995  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 44/120
	I1010 18:21:47.616909  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 45/120
	I1010 18:21:48.618251  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 46/120
	I1010 18:21:49.619866  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 47/120
	I1010 18:21:50.621291  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 48/120
	I1010 18:21:51.623367  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 49/120
	I1010 18:21:52.625587  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 50/120
	I1010 18:21:53.627591  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 51/120
	I1010 18:21:54.628957  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 52/120
	I1010 18:21:55.630519  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 53/120
	I1010 18:21:56.632468  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 54/120
	I1010 18:21:57.634618  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 55/120
	I1010 18:21:58.636145  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 56/120
	I1010 18:21:59.637684  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 57/120
	I1010 18:22:00.639258  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 58/120
	I1010 18:22:01.640982  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 59/120
	I1010 18:22:02.643168  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 60/120
	I1010 18:22:03.644570  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 61/120
	I1010 18:22:04.646392  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 62/120
	I1010 18:22:05.647657  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 63/120
	I1010 18:22:06.649352  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 64/120
	I1010 18:22:07.651146  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 65/120
	I1010 18:22:08.652489  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 66/120
	I1010 18:22:09.653860  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 67/120
	I1010 18:22:10.655876  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 68/120
	I1010 18:22:11.657336  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 69/120
	I1010 18:22:12.658985  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 70/120
	I1010 18:22:13.660559  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 71/120
	I1010 18:22:14.662137  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 72/120
	I1010 18:22:15.663591  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 73/120
	I1010 18:22:16.664907  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 74/120
	I1010 18:22:17.666937  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 75/120
	I1010 18:22:18.668575  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 76/120
	I1010 18:22:19.670355  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 77/120
	I1010 18:22:20.671700  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 78/120
	I1010 18:22:21.673493  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 79/120
	I1010 18:22:22.675439  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 80/120
	I1010 18:22:23.677222  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 81/120
	I1010 18:22:24.678647  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 82/120
	I1010 18:22:25.680057  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 83/120
	I1010 18:22:26.681397  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 84/120
	I1010 18:22:27.683535  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 85/120
	I1010 18:22:28.684975  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 86/120
	I1010 18:22:29.686290  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 87/120
	I1010 18:22:30.687565  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 88/120
	I1010 18:22:31.688993  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 89/120
	I1010 18:22:32.691288  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 90/120
	I1010 18:22:33.692746  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 91/120
	I1010 18:22:34.694547  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 92/120
	I1010 18:22:35.696021  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 93/120
	I1010 18:22:36.697419  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 94/120
	I1010 18:22:37.699323  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 95/120
	I1010 18:22:38.700730  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 96/120
	I1010 18:22:39.702106  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 97/120
	I1010 18:22:40.703662  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 98/120
	I1010 18:22:41.705310  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 99/120
	I1010 18:22:42.707000  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 100/120
	I1010 18:22:43.708746  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 101/120
	I1010 18:22:44.710119  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 102/120
	I1010 18:22:45.711577  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 103/120
	I1010 18:22:46.713180  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 104/120
	I1010 18:22:47.714942  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 105/120
	I1010 18:22:48.716404  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 106/120
	I1010 18:22:49.718038  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 107/120
	I1010 18:22:50.719594  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 108/120
	I1010 18:22:51.721036  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 109/120
	I1010 18:22:52.722477  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 110/120
	I1010 18:22:53.723810  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 111/120
	I1010 18:22:54.725297  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 112/120
	I1010 18:22:55.727298  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 113/120
	I1010 18:22:56.728712  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 114/120
	I1010 18:22:57.730203  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 115/120
	I1010 18:22:58.731651  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 116/120
	I1010 18:22:59.733238  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 117/120
	I1010 18:23:00.734699  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 118/120
	I1010 18:23:01.736342  105052 main.go:141] libmachine: (ha-142481-m03) Waiting for machine to stop 119/120
	I1010 18:23:02.737329  105052 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1010 18:23:02.737396  105052 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1010 18:23:02.739602  105052 out.go:201] 
	W1010 18:23:02.741079  105052 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1010 18:23:02.741104  105052 out.go:270] * 
	* 
	W1010 18:23:02.745387  105052 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 18:23:02.747145  105052 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-142481 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-142481 --wait=true -v=7 --alsologtostderr
E1010 18:23:17.725785   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:24:59.530427   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:26:22.602720   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-142481 --wait=true -v=7 --alsologtostderr: (4m37.006679481s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-142481
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-142481 -n ha-142481
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-142481 logs -n 25: (2.186402141s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m02:/home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m04 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp testdata/cp-test.txt                                                | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481:/home/docker/cp-test_ha-142481-m04_ha-142481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481 sudo cat                                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m02:/home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03:/home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m03 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-142481 node stop m02 -v=7                                                     | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-142481 node start m02 -v=7                                                    | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-142481 -v=7                                                           | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-142481 -v=7                                                                | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-142481 --wait=true -v=7                                                    | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:23 UTC | 10 Oct 24 18:27 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-142481                                                                | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:27 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 18:23:02
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:23:02.799869  105566 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:23:02.799974  105566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:23:02.799981  105566 out.go:358] Setting ErrFile to fd 2...
	I1010 18:23:02.799986  105566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:23:02.800189  105566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:23:02.800770  105566 out.go:352] Setting JSON to false
	I1010 18:23:02.801782  105566 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7529,"bootTime":1728577054,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:23:02.801848  105566 start.go:139] virtualization: kvm guest
	I1010 18:23:02.804271  105566 out.go:177] * [ha-142481] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 18:23:02.805716  105566 notify.go:220] Checking for updates...
	I1010 18:23:02.805750  105566 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 18:23:02.807264  105566 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:23:02.808999  105566 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:23:02.810604  105566 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:23:02.812099  105566 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:23:02.813629  105566 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:23:02.815396  105566 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:23:02.815505  105566 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 18:23:02.815945  105566 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:23:02.815985  105566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:23:02.831894  105566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I1010 18:23:02.832436  105566 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:23:02.833068  105566 main.go:141] libmachine: Using API Version  1
	I1010 18:23:02.833090  105566 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:23:02.833466  105566 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:23:02.833710  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:23:02.870627  105566 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 18:23:02.872583  105566 start.go:297] selected driver: kvm2
	I1010 18:23:02.872608  105566 start.go:901] validating driver "kvm2" against &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:23:02.872758  105566 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:23:02.873163  105566 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:23:02.873257  105566 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 18:23:02.889051  105566 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 18:23:02.889804  105566 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:23:02.889850  105566 cni.go:84] Creating CNI manager for ""
	I1010 18:23:02.889895  105566 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1010 18:23:02.889963  105566 start.go:340] cluster config:
	{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:23:02.890097  105566 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:23:02.892086  105566 out.go:177] * Starting "ha-142481" primary control-plane node in "ha-142481" cluster
	I1010 18:23:02.893407  105566 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:23:02.893449  105566 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:23:02.893460  105566 cache.go:56] Caching tarball of preloaded images
	I1010 18:23:02.893581  105566 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:23:02.893597  105566 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:23:02.893721  105566 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:23:02.893915  105566 start.go:360] acquireMachinesLock for ha-142481: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:23:02.893960  105566 start.go:364] duration metric: took 25.747µs to acquireMachinesLock for "ha-142481"
	I1010 18:23:02.893974  105566 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:23:02.893982  105566 fix.go:54] fixHost starting: 
	I1010 18:23:02.894237  105566 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:23:02.894268  105566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:23:02.909184  105566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I1010 18:23:02.909771  105566 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:23:02.910279  105566 main.go:141] libmachine: Using API Version  1
	I1010 18:23:02.910302  105566 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:23:02.910631  105566 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:23:02.910808  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:23:02.910962  105566 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:23:02.912715  105566 fix.go:112] recreateIfNeeded on ha-142481: state=Running err=<nil>
	W1010 18:23:02.912750  105566 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 18:23:02.915047  105566 out.go:177] * Updating the running kvm2 "ha-142481" VM ...
	I1010 18:23:02.916470  105566 machine.go:93] provisionDockerMachine start ...
	I1010 18:23:02.916496  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:23:02.916718  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:02.919545  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:02.920001  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:02.920024  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:02.920158  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:23:02.920356  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:02.920539  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:02.920713  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:23:02.920914  105566 main.go:141] libmachine: Using SSH client type: native
	I1010 18:23:02.921133  105566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:23:02.921146  105566 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:23:03.034002  105566 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481
	
	I1010 18:23:03.034028  105566 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:23:03.034309  105566 buildroot.go:166] provisioning hostname "ha-142481"
	I1010 18:23:03.034345  105566 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:23:03.034510  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:03.036973  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.037398  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.037444  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.037577  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:23:03.037760  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.037926  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.038039  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:23:03.038287  105566 main.go:141] libmachine: Using SSH client type: native
	I1010 18:23:03.038516  105566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:23:03.038529  105566 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481 && echo "ha-142481" | sudo tee /etc/hostname
	I1010 18:23:03.159604  105566 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481
	
	I1010 18:23:03.159637  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:03.162741  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.163148  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.163175  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.163417  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:23:03.163625  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.163812  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.163955  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:23:03.164182  105566 main.go:141] libmachine: Using SSH client type: native
	I1010 18:23:03.164383  105566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:23:03.164398  105566 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:23:03.274006  105566 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:23:03.274032  105566 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:23:03.274062  105566 buildroot.go:174] setting up certificates
	I1010 18:23:03.274076  105566 provision.go:84] configureAuth start
	I1010 18:23:03.274089  105566 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:23:03.274398  105566 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:23:03.277050  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.277403  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.277427  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.277583  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:03.280234  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.280699  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.280724  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.280903  105566 provision.go:143] copyHostCerts
	I1010 18:23:03.280946  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:23:03.280994  105566 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:23:03.281004  105566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:23:03.281068  105566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:23:03.281159  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:23:03.281176  105566 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:23:03.281182  105566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:23:03.281209  105566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:23:03.281282  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:23:03.281299  105566 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:23:03.281305  105566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:23:03.281326  105566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:23:03.281387  105566 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481 san=[127.0.0.1 192.168.39.104 ha-142481 localhost minikube]
	I1010 18:23:03.433036  105566 provision.go:177] copyRemoteCerts
	I1010 18:23:03.433099  105566 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:23:03.433125  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:03.435918  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.436348  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.436392  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.436602  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:23:03.436786  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.436954  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:23:03.437083  105566 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:23:03.523664  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:23:03.523740  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1010 18:23:03.555874  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:23:03.555974  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:23:03.582403  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:23:03.582490  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:23:03.609058  105566 provision.go:87] duration metric: took 334.963124ms to configureAuth
	I1010 18:23:03.609099  105566 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:23:03.609410  105566 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:23:03.609549  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:03.612233  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.613009  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:23:03.614113  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.614145  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.614184  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.614514  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.614709  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:23:03.614883  105566 main.go:141] libmachine: Using SSH client type: native
	I1010 18:23:03.615121  105566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:23:03.615158  105566 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:24:34.476953  105566 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:24:34.476994  105566 machine.go:96] duration metric: took 1m31.560504087s to provisionDockerMachine
	I1010 18:24:34.477012  105566 start.go:293] postStartSetup for "ha-142481" (driver="kvm2")
	I1010 18:24:34.477027  105566 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:24:34.477055  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.477380  105566 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:24:34.477420  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:24:34.480707  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.481161  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.481181  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.481390  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:24:34.481570  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.481771  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:24:34.481926  105566 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:24:34.569267  105566 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:24:34.574040  105566 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:24:34.574072  105566 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:24:34.574173  105566 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:24:34.574287  105566 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:24:34.574308  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:24:34.574437  105566 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:24:34.584565  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:24:34.612294  105566 start.go:296] duration metric: took 135.263736ms for postStartSetup
	I1010 18:24:34.612349  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.612707  105566 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1010 18:24:34.612741  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:24:34.615527  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.616032  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.616061  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.616283  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:24:34.616442  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.616652  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:24:34.616790  105566 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	W1010 18:24:34.699466  105566 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1010 18:24:34.699501  105566 fix.go:56] duration metric: took 1m31.80551892s for fixHost
	I1010 18:24:34.699564  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:24:34.702515  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.702963  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.702989  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.703221  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:24:34.703416  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.703584  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.703713  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:24:34.703953  105566 main.go:141] libmachine: Using SSH client type: native
	I1010 18:24:34.704137  105566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:24:34.704150  105566 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:24:34.818407  105566 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584674.801891150
	
	I1010 18:24:34.818434  105566 fix.go:216] guest clock: 1728584674.801891150
	I1010 18:24:34.818444  105566 fix.go:229] Guest: 2024-10-10 18:24:34.80189115 +0000 UTC Remote: 2024-10-10 18:24:34.699509961 +0000 UTC m=+91.940454718 (delta=102.381189ms)
	I1010 18:24:34.818478  105566 fix.go:200] guest clock delta is within tolerance: 102.381189ms
	I1010 18:24:34.818485  105566 start.go:83] releasing machines lock for "ha-142481", held for 1m31.924514882s
	I1010 18:24:34.818520  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.818768  105566 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:24:34.821813  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.822151  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.822170  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.822302  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.822958  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.823141  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.823245  105566 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:24:34.823306  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:24:34.823402  105566 ssh_runner.go:195] Run: cat /version.json
	I1010 18:24:34.823433  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:24:34.826075  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.826432  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.826453  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.826471  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.826616  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:24:34.826783  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.826908  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.826931  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.826956  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:24:34.827085  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:24:34.827268  105566 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:24:34.827346  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.827552  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:24:34.827708  105566 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:24:34.929952  105566 ssh_runner.go:195] Run: systemctl --version
	I1010 18:24:34.936136  105566 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:24:35.099285  105566 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:24:35.107144  105566 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:24:35.107220  105566 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:24:35.117620  105566 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:24:35.117647  105566 start.go:495] detecting cgroup driver to use...
	I1010 18:24:35.117721  105566 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:24:35.134625  105566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:24:35.149118  105566 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:24:35.149186  105566 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:24:35.163579  105566 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:24:35.178125  105566 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:24:35.329029  105566 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:24:35.475666  105566 docker.go:233] disabling docker service ...
	I1010 18:24:35.475764  105566 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:24:35.495156  105566 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:24:35.509957  105566 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:24:35.656185  105566 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:24:35.806588  105566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:24:35.825490  105566 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:24:35.844879  105566 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:24:35.844943  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.857000  105566 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:24:35.857063  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.868006  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.879217  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.891582  105566 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:24:35.903802  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.915175  105566 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.926660  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.938352  105566 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:24:35.948342  105566 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:24:35.958296  105566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:24:36.106096  105566 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:24:41.954317  105566 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.848174322s)
	I1010 18:24:41.954356  105566 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:24:41.954406  105566 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:24:41.959808  105566 start.go:563] Will wait 60s for crictl version
	I1010 18:24:41.959874  105566 ssh_runner.go:195] Run: which crictl
	I1010 18:24:41.963918  105566 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:24:42.002757  105566 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:24:42.002857  105566 ssh_runner.go:195] Run: crio --version
	I1010 18:24:42.033367  105566 ssh_runner.go:195] Run: crio --version
	I1010 18:24:42.066228  105566 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:24:42.067983  105566 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:24:42.070753  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:42.071117  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:42.071147  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:42.071332  105566 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:24:42.076464  105566 kubeadm.go:883] updating cluster {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:24:42.076607  105566 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:24:42.076652  105566 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:24:42.120159  105566 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:24:42.120182  105566 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:24:42.120244  105566 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:24:42.158431  105566 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:24:42.158462  105566 cache_images.go:84] Images are preloaded, skipping loading
	I1010 18:24:42.158474  105566 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I1010 18:24:42.158622  105566 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:24:42.158705  105566 ssh_runner.go:195] Run: crio config
	I1010 18:24:42.206423  105566 cni.go:84] Creating CNI manager for ""
	I1010 18:24:42.206447  105566 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1010 18:24:42.206463  105566 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 18:24:42.206483  105566 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-142481 NodeName:ha-142481 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:24:42.206622  105566 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-142481"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:24:42.206640  105566 kube-vip.go:115] generating kube-vip config ...
	I1010 18:24:42.206681  105566 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:24:42.218504  105566 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:24:42.218604  105566 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1010 18:24:42.218697  105566 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:24:42.228568  105566 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:24:42.228640  105566 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1010 18:24:42.238233  105566 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1010 18:24:42.255239  105566 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:24:42.272092  105566 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1010 18:24:42.289591  105566 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1010 18:24:42.306980  105566 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:24:42.311912  105566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:24:42.460829  105566 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:24:42.475897  105566 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.104
	I1010 18:24:42.475924  105566 certs.go:194] generating shared ca certs ...
	I1010 18:24:42.475941  105566 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:24:42.476127  105566 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:24:42.476187  105566 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:24:42.476199  105566 certs.go:256] generating profile certs ...
	I1010 18:24:42.476298  105566 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:24:42.476334  105566 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.8eb6d082
	I1010 18:24:42.476365  105566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.8eb6d082 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.186 192.168.39.175 192.168.39.254]
	I1010 18:24:42.740943  105566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.8eb6d082 ...
	I1010 18:24:42.740981  105566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.8eb6d082: {Name:mkb42965377953a0f50d0ba2dc7c2ec3a85872d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:24:42.741155  105566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.8eb6d082 ...
	I1010 18:24:42.741166  105566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.8eb6d082: {Name:mkecaa2382e2d924f9081504237cbd5394d213b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:24:42.741258  105566 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.8eb6d082 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:24:42.741423  105566 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.8eb6d082 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
	I1010 18:24:42.741562  105566 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:24:42.741579  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:24:42.741593  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:24:42.741606  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:24:42.741618  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:24:42.741632  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:24:42.741644  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:24:42.741660  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:24:42.741675  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:24:42.741725  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:24:42.741752  105566 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:24:42.741761  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:24:42.741789  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:24:42.741810  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:24:42.741831  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:24:42.741868  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:24:42.741892  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:24:42.741905  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:24:42.741917  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:24:42.742475  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:24:42.771347  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:24:42.796627  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:24:42.822730  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:24:42.849047  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 18:24:42.875516  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:24:42.901199  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:24:42.925795  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:24:42.951007  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:24:42.976727  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:24:43.000736  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:24:43.026304  105566 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:24:43.044525  105566 ssh_runner.go:195] Run: openssl version
	I1010 18:24:43.050725  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:24:43.061836  105566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:24:43.066494  105566 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:24:43.066546  105566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:24:43.072205  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:24:43.081254  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:24:43.092147  105566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:24:43.096621  105566 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:24:43.096674  105566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:24:43.102292  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:24:43.111591  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:24:43.122378  105566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:24:43.127233  105566 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:24:43.127289  105566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:24:43.133161  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 18:24:43.142538  105566 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:24:43.147321  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:24:43.152973  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:24:43.158637  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:24:43.164323  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:24:43.169916  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:24:43.175512  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 18:24:43.181088  105566 kubeadm.go:392] StartCluster: {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:24:43.181254  105566 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:24:43.181298  105566 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:24:43.222814  105566 cri.go:89] found id: "85a0a1ee7a4cf4538126707aee79f70aec9dcb25dfbade502986283693e191ce"
	I1010 18:24:43.222838  105566 cri.go:89] found id: "bae82cf35c584cc9a970120a0e5807eb94fbae4a014b20431c5dcbb6f9cf74a7"
	I1010 18:24:43.222841  105566 cri.go:89] found id: "c7d0b95f4e67fff73260e403f16b365fbb721bc6dea2c2b868fb3c44f8b72844"
	I1010 18:24:43.222844  105566 cri.go:89] found id: "4dfd60c6197ecf8458417fd5dd0853e22c5baa8dd86f0bfa0d706e1ba6f65928"
	I1010 18:24:43.222847  105566 cri.go:89] found id: "018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37"
	I1010 18:24:43.222850  105566 cri.go:89] found id: "5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e"
	I1010 18:24:43.222852  105566 cri.go:89] found id: "b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3"
	I1010 18:24:43.222855  105566 cri.go:89] found id: "9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9"
	I1010 18:24:43.222857  105566 cri.go:89] found id: "80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4"
	I1010 18:24:43.222863  105566 cri.go:89] found id: "751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c"
	I1010 18:24:43.222866  105566 cri.go:89] found id: "4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf"
	I1010 18:24:43.222869  105566 cri.go:89] found id: "43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026"
	I1010 18:24:43.222873  105566 cri.go:89] found id: "206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58"
	I1010 18:24:43.222877  105566 cri.go:89] found id: ""
	I1010 18:24:43.222930  105566 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-142481 -n ha-142481
helpers_test.go:261: (dbg) Run:  kubectl --context ha-142481 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (401.91s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (142.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 stop -v=7 --alsologtostderr
E1010 18:29:59.530491   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-142481 stop -v=7 --alsologtostderr: exit status 82 (2m0.489298617s)

                                                
                                                
-- stdout --
	* Stopping node "ha-142481-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 18:28:00.135862  107455 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:28:00.136117  107455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:28:00.136127  107455 out.go:358] Setting ErrFile to fd 2...
	I1010 18:28:00.136131  107455 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:28:00.136320  107455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:28:00.136544  107455 out.go:352] Setting JSON to false
	I1010 18:28:00.136624  107455 mustload.go:65] Loading cluster: ha-142481
	I1010 18:28:00.137030  107455 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:28:00.137131  107455 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:28:00.137324  107455 mustload.go:65] Loading cluster: ha-142481
	I1010 18:28:00.137449  107455 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:28:00.137471  107455 stop.go:39] StopHost: ha-142481-m04
	I1010 18:28:00.137823  107455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:28:00.137880  107455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:28:00.153925  107455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40277
	I1010 18:28:00.154627  107455 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:28:00.155211  107455 main.go:141] libmachine: Using API Version  1
	I1010 18:28:00.155246  107455 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:28:00.156601  107455 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:28:00.159554  107455 out.go:177] * Stopping node "ha-142481-m04"  ...
	I1010 18:28:00.160888  107455 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1010 18:28:00.160956  107455 main.go:141] libmachine: (ha-142481-m04) Calling .DriverName
	I1010 18:28:00.161206  107455 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1010 18:28:00.161337  107455 main.go:141] libmachine: (ha-142481-m04) Calling .GetSSHHostname
	I1010 18:28:00.164622  107455 main.go:141] libmachine: (ha-142481-m04) DBG | domain ha-142481-m04 has defined MAC address 52:54:00:5e:f1:0f in network mk-ha-142481
	I1010 18:28:00.165139  107455 main.go:141] libmachine: (ha-142481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:f1:0f", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:27:27 +0000 UTC Type:0 Mac:52:54:00:5e:f1:0f Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:ha-142481-m04 Clientid:01:52:54:00:5e:f1:0f}
	I1010 18:28:00.165163  107455 main.go:141] libmachine: (ha-142481-m04) DBG | domain ha-142481-m04 has defined IP address 192.168.39.164 and MAC address 52:54:00:5e:f1:0f in network mk-ha-142481
	I1010 18:28:00.165386  107455 main.go:141] libmachine: (ha-142481-m04) Calling .GetSSHPort
	I1010 18:28:00.165592  107455 main.go:141] libmachine: (ha-142481-m04) Calling .GetSSHKeyPath
	I1010 18:28:00.165737  107455 main.go:141] libmachine: (ha-142481-m04) Calling .GetSSHUsername
	I1010 18:28:00.165882  107455 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481-m04/id_rsa Username:docker}
	I1010 18:28:00.247587  107455 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1010 18:28:00.301138  107455 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1010 18:28:00.353643  107455 main.go:141] libmachine: Stopping "ha-142481-m04"...
	I1010 18:28:00.353674  107455 main.go:141] libmachine: (ha-142481-m04) Calling .GetState
	I1010 18:28:00.355322  107455 main.go:141] libmachine: (ha-142481-m04) Calling .Stop
	I1010 18:28:00.359342  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 0/120
	I1010 18:28:01.360774  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 1/120
	I1010 18:28:02.362239  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 2/120
	I1010 18:28:03.363816  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 3/120
	I1010 18:28:04.365478  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 4/120
	I1010 18:28:05.367532  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 5/120
	I1010 18:28:06.369371  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 6/120
	I1010 18:28:07.371522  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 7/120
	I1010 18:28:08.373405  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 8/120
	I1010 18:28:09.375389  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 9/120
	I1010 18:28:10.376487  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 10/120
	I1010 18:28:11.377891  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 11/120
	I1010 18:28:12.379126  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 12/120
	I1010 18:28:13.380492  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 13/120
	I1010 18:28:14.382155  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 14/120
	I1010 18:28:15.384007  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 15/120
	I1010 18:28:16.385481  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 16/120
	I1010 18:28:17.387158  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 17/120
	I1010 18:28:18.389045  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 18/120
	I1010 18:28:19.390841  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 19/120
	I1010 18:28:20.393636  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 20/120
	I1010 18:28:21.395109  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 21/120
	I1010 18:28:22.396660  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 22/120
	I1010 18:28:23.398319  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 23/120
	I1010 18:28:24.399628  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 24/120
	I1010 18:28:25.401800  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 25/120
	I1010 18:28:26.403584  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 26/120
	I1010 18:28:27.405102  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 27/120
	I1010 18:28:28.406577  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 28/120
	I1010 18:28:29.408032  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 29/120
	I1010 18:28:30.409968  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 30/120
	I1010 18:28:31.411489  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 31/120
	I1010 18:28:32.413931  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 32/120
	I1010 18:28:33.415738  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 33/120
	I1010 18:28:34.417573  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 34/120
	I1010 18:28:35.419435  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 35/120
	I1010 18:28:36.420891  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 36/120
	I1010 18:28:37.422549  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 37/120
	I1010 18:28:38.423983  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 38/120
	I1010 18:28:39.425465  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 39/120
	I1010 18:28:40.427484  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 40/120
	I1010 18:28:41.429043  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 41/120
	I1010 18:28:42.431711  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 42/120
	I1010 18:28:43.433076  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 43/120
	I1010 18:28:44.435686  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 44/120
	I1010 18:28:45.437695  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 45/120
	I1010 18:28:46.439062  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 46/120
	I1010 18:28:47.440789  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 47/120
	I1010 18:28:48.443097  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 48/120
	I1010 18:28:49.444553  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 49/120
	I1010 18:28:50.447039  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 50/120
	I1010 18:28:51.448515  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 51/120
	I1010 18:28:52.449923  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 52/120
	I1010 18:28:53.451576  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 53/120
	I1010 18:28:54.453058  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 54/120
	I1010 18:28:55.455423  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 55/120
	I1010 18:28:56.457270  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 56/120
	I1010 18:28:57.458743  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 57/120
	I1010 18:28:58.460369  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 58/120
	I1010 18:28:59.461832  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 59/120
	I1010 18:29:00.464209  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 60/120
	I1010 18:29:01.466102  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 61/120
	I1010 18:29:02.467537  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 62/120
	I1010 18:29:03.469129  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 63/120
	I1010 18:29:04.470670  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 64/120
	I1010 18:29:05.473109  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 65/120
	I1010 18:29:06.475427  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 66/120
	I1010 18:29:07.476771  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 67/120
	I1010 18:29:08.478134  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 68/120
	I1010 18:29:09.479462  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 69/120
	I1010 18:29:10.481661  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 70/120
	I1010 18:29:11.482978  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 71/120
	I1010 18:29:12.484344  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 72/120
	I1010 18:29:13.485964  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 73/120
	I1010 18:29:14.487228  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 74/120
	I1010 18:29:15.489323  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 75/120
	I1010 18:29:16.491345  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 76/120
	I1010 18:29:17.492691  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 77/120
	I1010 18:29:18.494120  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 78/120
	I1010 18:29:19.495397  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 79/120
	I1010 18:29:20.497443  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 80/120
	I1010 18:29:21.498902  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 81/120
	I1010 18:29:22.500383  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 82/120
	I1010 18:29:23.501969  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 83/120
	I1010 18:29:24.503339  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 84/120
	I1010 18:29:25.505903  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 85/120
	I1010 18:29:26.507236  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 86/120
	I1010 18:29:27.508845  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 87/120
	I1010 18:29:28.510658  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 88/120
	I1010 18:29:29.512615  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 89/120
	I1010 18:29:30.514993  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 90/120
	I1010 18:29:31.516762  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 91/120
	I1010 18:29:32.518612  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 92/120
	I1010 18:29:33.520156  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 93/120
	I1010 18:29:34.521676  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 94/120
	I1010 18:29:35.523461  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 95/120
	I1010 18:29:36.525987  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 96/120
	I1010 18:29:37.527567  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 97/120
	I1010 18:29:38.529769  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 98/120
	I1010 18:29:39.531435  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 99/120
	I1010 18:29:40.533729  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 100/120
	I1010 18:29:41.535468  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 101/120
	I1010 18:29:42.537117  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 102/120
	I1010 18:29:43.539486  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 103/120
	I1010 18:29:44.541330  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 104/120
	I1010 18:29:45.543844  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 105/120
	I1010 18:29:46.545495  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 106/120
	I1010 18:29:47.547484  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 107/120
	I1010 18:29:48.549471  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 108/120
	I1010 18:29:49.550994  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 109/120
	I1010 18:29:50.552602  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 110/120
	I1010 18:29:51.553883  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 111/120
	I1010 18:29:52.555623  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 112/120
	I1010 18:29:53.557197  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 113/120
	I1010 18:29:54.558670  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 114/120
	I1010 18:29:55.561024  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 115/120
	I1010 18:29:56.562428  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 116/120
	I1010 18:29:57.564049  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 117/120
	I1010 18:29:58.565197  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 118/120
	I1010 18:29:59.567469  107455 main.go:141] libmachine: (ha-142481-m04) Waiting for machine to stop 119/120
	I1010 18:30:00.568822  107455 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1010 18:30:00.568949  107455 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1010 18:30:00.570774  107455 out.go:201] 
	W1010 18:30:00.572095  107455 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1010 18:30:00.572116  107455 out.go:270] * 
	* 
	W1010 18:30:00.576000  107455 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 18:30:00.577607  107455 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-142481 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr: (18.882790083s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-142481 -n ha-142481
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-142481 logs -n 25: (2.224248285s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m04 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp testdata/cp-test.txt                                                | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481:/home/docker/cp-test_ha-142481-m04_ha-142481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481 sudo cat                                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m02:/home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m02 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m03:/home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n                                                                 | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | ha-142481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-142481 ssh -n ha-142481-m03 sudo cat                                          | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC | 10 Oct 24 18:18 UTC |
	|         | /home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-142481 node stop m02 -v=7                                                     | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-142481 node start m02 -v=7                                                    | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-142481 -v=7                                                           | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-142481 -v=7                                                                | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-142481 --wait=true -v=7                                                    | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:23 UTC | 10 Oct 24 18:27 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-142481                                                                | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:27 UTC |                     |
	| node    | ha-142481 node delete m03 -v=7                                                   | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:27 UTC | 10 Oct 24 18:27 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-142481 stop -v=7                                                              | ha-142481 | jenkins | v1.34.0 | 10 Oct 24 18:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 18:23:02
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:23:02.799869  105566 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:23:02.799974  105566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:23:02.799981  105566 out.go:358] Setting ErrFile to fd 2...
	I1010 18:23:02.799986  105566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:23:02.800189  105566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:23:02.800770  105566 out.go:352] Setting JSON to false
	I1010 18:23:02.801782  105566 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7529,"bootTime":1728577054,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:23:02.801848  105566 start.go:139] virtualization: kvm guest
	I1010 18:23:02.804271  105566 out.go:177] * [ha-142481] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 18:23:02.805716  105566 notify.go:220] Checking for updates...
	I1010 18:23:02.805750  105566 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 18:23:02.807264  105566 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:23:02.808999  105566 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:23:02.810604  105566 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:23:02.812099  105566 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:23:02.813629  105566 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:23:02.815396  105566 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:23:02.815505  105566 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 18:23:02.815945  105566 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:23:02.815985  105566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:23:02.831894  105566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46801
	I1010 18:23:02.832436  105566 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:23:02.833068  105566 main.go:141] libmachine: Using API Version  1
	I1010 18:23:02.833090  105566 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:23:02.833466  105566 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:23:02.833710  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:23:02.870627  105566 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 18:23:02.872583  105566 start.go:297] selected driver: kvm2
	I1010 18:23:02.872608  105566 start.go:901] validating driver "kvm2" against &{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:23:02.872758  105566 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:23:02.873163  105566 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:23:02.873257  105566 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 18:23:02.889051  105566 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 18:23:02.889804  105566 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:23:02.889850  105566 cni.go:84] Creating CNI manager for ""
	I1010 18:23:02.889895  105566 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1010 18:23:02.889963  105566 start.go:340] cluster config:
	{Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel
:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:23:02.890097  105566 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:23:02.892086  105566 out.go:177] * Starting "ha-142481" primary control-plane node in "ha-142481" cluster
	I1010 18:23:02.893407  105566 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:23:02.893449  105566 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:23:02.893460  105566 cache.go:56] Caching tarball of preloaded images
	I1010 18:23:02.893581  105566 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:23:02.893597  105566 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:23:02.893721  105566 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/config.json ...
	I1010 18:23:02.893915  105566 start.go:360] acquireMachinesLock for ha-142481: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:23:02.893960  105566 start.go:364] duration metric: took 25.747µs to acquireMachinesLock for "ha-142481"
	I1010 18:23:02.893974  105566 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:23:02.893982  105566 fix.go:54] fixHost starting: 
	I1010 18:23:02.894237  105566 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:23:02.894268  105566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:23:02.909184  105566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42679
	I1010 18:23:02.909771  105566 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:23:02.910279  105566 main.go:141] libmachine: Using API Version  1
	I1010 18:23:02.910302  105566 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:23:02.910631  105566 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:23:02.910808  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:23:02.910962  105566 main.go:141] libmachine: (ha-142481) Calling .GetState
	I1010 18:23:02.912715  105566 fix.go:112] recreateIfNeeded on ha-142481: state=Running err=<nil>
	W1010 18:23:02.912750  105566 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 18:23:02.915047  105566 out.go:177] * Updating the running kvm2 "ha-142481" VM ...
	I1010 18:23:02.916470  105566 machine.go:93] provisionDockerMachine start ...
	I1010 18:23:02.916496  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:23:02.916718  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:02.919545  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:02.920001  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:02.920024  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:02.920158  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:23:02.920356  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:02.920539  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:02.920713  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:23:02.920914  105566 main.go:141] libmachine: Using SSH client type: native
	I1010 18:23:02.921133  105566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:23:02.921146  105566 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:23:03.034002  105566 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481
	
	I1010 18:23:03.034028  105566 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:23:03.034309  105566 buildroot.go:166] provisioning hostname "ha-142481"
	I1010 18:23:03.034345  105566 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:23:03.034510  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:03.036973  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.037398  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.037444  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.037577  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:23:03.037760  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.037926  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.038039  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:23:03.038287  105566 main.go:141] libmachine: Using SSH client type: native
	I1010 18:23:03.038516  105566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:23:03.038529  105566 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-142481 && echo "ha-142481" | sudo tee /etc/hostname
	I1010 18:23:03.159604  105566 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-142481
	
	I1010 18:23:03.159637  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:03.162741  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.163148  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.163175  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.163417  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:23:03.163625  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.163812  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.163955  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:23:03.164182  105566 main.go:141] libmachine: Using SSH client type: native
	I1010 18:23:03.164383  105566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:23:03.164398  105566 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-142481' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-142481/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-142481' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:23:03.274006  105566 main.go:141] libmachine: SSH cmd err, output: <nil>: 
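
The provisioning above drives every step through a native Go SSH client (the "Using SSH client type: native" lines, with the address/port struct dumped beside them). As a rough illustration only, and not minikube's actual ssh_runner/sshutil code, the sketch below runs the same hostname command over golang.org/x/crypto/ssh, assuming the key path, user and address shown in the log:

    // provision_sketch.go - illustrative only: run one of the provisioning commands
    // shown above over SSH. Key path, user and address are taken from the log;
    // this is not minikube's ssh_runner/sshutil implementation.
    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.104:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Same command the log shows for setting the hostname.
        out, err := session.CombinedOutput(`sudo hostname ha-142481 && echo "ha-142481" | sudo tee /etc/hostname`)
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }
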
	I1010 18:23:03.274032  105566 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:23:03.274062  105566 buildroot.go:174] setting up certificates
	I1010 18:23:03.274076  105566 provision.go:84] configureAuth start
	I1010 18:23:03.274089  105566 main.go:141] libmachine: (ha-142481) Calling .GetMachineName
	I1010 18:23:03.274398  105566 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:23:03.277050  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.277403  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.277427  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.277583  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:03.280234  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.280699  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.280724  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.280903  105566 provision.go:143] copyHostCerts
	I1010 18:23:03.280946  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:23:03.280994  105566 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:23:03.281004  105566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:23:03.281068  105566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:23:03.281159  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:23:03.281176  105566 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:23:03.281182  105566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:23:03.281209  105566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:23:03.281282  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:23:03.281299  105566 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:23:03.281305  105566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:23:03.281326  105566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:23:03.281387  105566 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.ha-142481 san=[127.0.0.1 192.168.39.104 ha-142481 localhost minikube]
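
The "generating server cert" line above signs a server certificate with the existing minikube CA and adds the listed IP and DNS SANs. A minimal sketch of that signing step with crypto/x509 follows; it assumes PKCS#1 PEM keys and relative paths, so it is illustrative rather than minikube's provision code:

    // servercert_sketch.go - illustrative sketch of "generating server cert ... san=[...]":
    // sign a server certificate with the existing minikube CA, adding the IP and DNS SANs
    // from the log. Assumes PKCS#1 PEM keys and relative paths.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func mustPEMBlock(path string) *pem.Block {
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatalf("no PEM data in %s", path)
        }
        return block
    }

    func main() {
        caCert, err := x509.ParseCertificate(mustPEMBlock("certs/ca.pem").Bytes)
        if err != nil {
            log.Fatal(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(mustPEMBlock("certs/ca-key.pem").Bytes)
        if err != nil {
            log.Fatal(err)
        }

        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-142481"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour), // same horizon as CertExpiration in the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log: IPs and host names go into separate fields.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.104")},
            DNSNames:    []string{"ha-142481", "localhost", "minikube"},
        }

        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
        if err := os.WriteFile("server.pem", certPEM, 0o644); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("server-key.pem", keyPEM, 0o600); err != nil {
            log.Fatal(err)
        }
    }
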
	I1010 18:23:03.433036  105566 provision.go:177] copyRemoteCerts
	I1010 18:23:03.433099  105566 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:23:03.433125  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:03.435918  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.436348  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.436392  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.436602  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:23:03.436786  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.436954  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:23:03.437083  105566 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:23:03.523664  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:23:03.523740  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1010 18:23:03.555874  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:23:03.555974  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:23:03.582403  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:23:03.582490  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:23:03.609058  105566 provision.go:87] duration metric: took 334.963124ms to configureAuth
	I1010 18:23:03.609099  105566 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:23:03.609410  105566 config.go:182] Loaded profile config "ha-142481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:23:03.609549  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:23:03.612233  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.613009  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:23:03.614113  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:23:03.614145  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:23:03.614184  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.614514  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:23:03.614709  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:23:03.614883  105566 main.go:141] libmachine: Using SSH client type: native
	I1010 18:23:03.615121  105566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:23:03.615158  105566 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:24:34.476953  105566 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:24:34.476994  105566 machine.go:96] duration metric: took 1m31.560504087s to provisionDockerMachine
	I1010 18:24:34.477012  105566 start.go:293] postStartSetup for "ha-142481" (driver="kvm2")
	I1010 18:24:34.477027  105566 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:24:34.477055  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.477380  105566 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:24:34.477420  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:24:34.480707  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.481161  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.481181  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.481390  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:24:34.481570  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.481771  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:24:34.481926  105566 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:24:34.569267  105566 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:24:34.574040  105566 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:24:34.574072  105566 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:24:34.574173  105566 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:24:34.574287  105566 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:24:34.574308  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:24:34.574437  105566 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:24:34.584565  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
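
Each "ssh_runner.go:362] scp <local> --> <remote> (N bytes)" line above is a file push into the VM. A minimal sketch of the same idea using the SFTP subsystem (github.com/pkg/sftp) is below; it is not minikube's scp implementation, and it copies to /tmp because the real targets under /etc need root, which minikube obtains by wrapping the write with sudo:

    // scpfile_sketch.go - illustrative sketch of the "scp <local> --> <remote> (N bytes)"
    // lines above, using SFTP instead of minikube's own scp.
    package main

    import (
        "io"
        "log"
        "os"
        "path"

        "github.com/pkg/sftp"
        "golang.org/x/crypto/ssh"
    )

    // copyFile pushes one local file to the VM, creating the target directory first.
    func copyFile(client *ssh.Client, local, remote string) error {
        sftpClient, err := sftp.NewClient(client)
        if err != nil {
            return err
        }
        defer sftpClient.Close()

        if err := sftpClient.MkdirAll(path.Dir(remote)); err != nil {
            return err
        }
        src, err := os.Open(local)
        if err != nil {
            return err
        }
        defer src.Close()

        dst, err := sftpClient.Create(remote)
        if err != nil {
            return err
        }
        defer dst.Close()

        n, err := io.Copy(dst, src)
        if err != nil {
            return err
        }
        log.Printf("copied %s --> %s (%d bytes)", local, remote, n)
        return nil
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        client, err := ssh.Dial("tcp", "192.168.39.104:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        })
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The log's target is /etc/ssl/certs/888762.pem; /tmp keeps the sketch runnable without root.
        if err := copyFile(client, "888762.pem", "/tmp/888762.pem"); err != nil {
            log.Fatal(err)
        }
    }
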
	I1010 18:24:34.612294  105566 start.go:296] duration metric: took 135.263736ms for postStartSetup
	I1010 18:24:34.612349  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.612707  105566 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1010 18:24:34.612741  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:24:34.615527  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.616032  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.616061  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.616283  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:24:34.616442  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.616652  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:24:34.616790  105566 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	W1010 18:24:34.699466  105566 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1010 18:24:34.699501  105566 fix.go:56] duration metric: took 1m31.80551892s for fixHost
	I1010 18:24:34.699564  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:24:34.702515  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.702963  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.702989  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.703221  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:24:34.703416  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.703584  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.703713  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:24:34.703953  105566 main.go:141] libmachine: Using SSH client type: native
	I1010 18:24:34.704137  105566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I1010 18:24:34.704150  105566 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:24:34.818407  105566 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728584674.801891150
	
	I1010 18:24:34.818434  105566 fix.go:216] guest clock: 1728584674.801891150
	I1010 18:24:34.818444  105566 fix.go:229] Guest: 2024-10-10 18:24:34.80189115 +0000 UTC Remote: 2024-10-10 18:24:34.699509961 +0000 UTC m=+91.940454718 (delta=102.381189ms)
	I1010 18:24:34.818478  105566 fix.go:200] guest clock delta is within tolerance: 102.381189ms
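
The "guest clock" lines above come from running `date +%s.%N` in the VM and comparing the result with the host clock. A minimal sketch of that parse-and-compare step follows; the 1s tolerance is an assumption for illustration, the log only shows that a ~102ms delta was accepted:

    // clockdelta_sketch.go - illustrative sketch of the guest-clock check: parse the
    // guest's `date +%s.%N` output, compare with the host clock, and flag the drift
    // if it exceeds a tolerance.
    package main

    import (
        "fmt"
        "log"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock expects the `date +%s.%N` format, i.e. seconds and a 9-digit
    // nanosecond field, as in the log value 1728584674.801891150.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1728584674.801891150\n") // value from the log
        if err != nil {
            log.Fatal(err)
        }
        delta := guest.Sub(time.Now())
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed threshold for the sketch
        if delta > tolerance {
            fmt.Printf("guest clock delta %v exceeds tolerance %v, would resync\n", delta, tolerance)
            return
        }
        fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    }
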
	I1010 18:24:34.818485  105566 start.go:83] releasing machines lock for "ha-142481", held for 1m31.924514882s
	I1010 18:24:34.818520  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.818768  105566 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:24:34.821813  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.822151  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.822170  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.822302  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.822958  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.823141  105566 main.go:141] libmachine: (ha-142481) Calling .DriverName
	I1010 18:24:34.823245  105566 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:24:34.823306  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:24:34.823402  105566 ssh_runner.go:195] Run: cat /version.json
	I1010 18:24:34.823433  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHHostname
	I1010 18:24:34.826075  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.826432  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.826453  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.826471  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.826616  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:24:34.826783  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.826908  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:34.826931  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:34.826956  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:24:34.827085  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHPort
	I1010 18:24:34.827268  105566 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:24:34.827346  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHKeyPath
	I1010 18:24:34.827552  105566 main.go:141] libmachine: (ha-142481) Calling .GetSSHUsername
	I1010 18:24:34.827708  105566 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/ha-142481/id_rsa Username:docker}
	I1010 18:24:34.929952  105566 ssh_runner.go:195] Run: systemctl --version
	I1010 18:24:34.936136  105566 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:24:35.099285  105566 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:24:35.107144  105566 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:24:35.107220  105566 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:24:35.117620  105566 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:24:35.117647  105566 start.go:495] detecting cgroup driver to use...
	I1010 18:24:35.117721  105566 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:24:35.134625  105566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:24:35.149118  105566 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:24:35.149186  105566 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:24:35.163579  105566 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:24:35.178125  105566 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:24:35.329029  105566 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:24:35.475666  105566 docker.go:233] disabling docker service ...
	I1010 18:24:35.475764  105566 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:24:35.495156  105566 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:24:35.509957  105566 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:24:35.656185  105566 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:24:35.806588  105566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:24:35.825490  105566 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:24:35.844879  105566 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:24:35.844943  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.857000  105566 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:24:35.857063  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.868006  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.879217  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.891582  105566 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:24:35.903802  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.915175  105566 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.926660  105566 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:24:35.938352  105566 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:24:35.948342  105566 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:24:35.958296  105566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:24:36.106096  105566 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:24:41.954317  105566 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.848174322s)
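
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with `sudo sed -i` (pause image, cgroup manager, conmon_cgroup, sysctls) and then restarts crio. A minimal local sketch of the first two substitutions, done with regexp instead of sed, is below; run it against a copy of the drop-in file:

    // crioconf_sketch.go - illustrative local equivalent of the sed edits above:
    // rewrite pause_image and cgroup_manager in a crio drop-in config.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "02-crio.conf" // local copy; the real file is /etc/crio/crio.conf.d/02-crio.conf
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        conf := string(data)

        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Println("rewrote pause_image and cgroup_manager; crio must be restarted to pick this up")
    }
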
	I1010 18:24:41.954356  105566 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:24:41.954406  105566 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:24:41.959808  105566 start.go:563] Will wait 60s for crictl version
	I1010 18:24:41.959874  105566 ssh_runner.go:195] Run: which crictl
	I1010 18:24:41.963918  105566 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:24:42.002757  105566 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:24:42.002857  105566 ssh_runner.go:195] Run: crio --version
	I1010 18:24:42.033367  105566 ssh_runner.go:195] Run: crio --version
	I1010 18:24:42.066228  105566 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:24:42.067983  105566 main.go:141] libmachine: (ha-142481) Calling .GetIP
	I1010 18:24:42.070753  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:42.071117  105566 main.go:141] libmachine: (ha-142481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:fa:00", ip: ""} in network mk-ha-142481: {Iface:virbr1 ExpiryTime:2024-10-10 19:13:53 +0000 UTC Type:0 Mac:52:54:00:3e:fa:00 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-142481 Clientid:01:52:54:00:3e:fa:00}
	I1010 18:24:42.071147  105566 main.go:141] libmachine: (ha-142481) DBG | domain ha-142481 has defined IP address 192.168.39.104 and MAC address 52:54:00:3e:fa:00 in network mk-ha-142481
	I1010 18:24:42.071332  105566 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:24:42.076464  105566 kubeadm.go:883] updating cluster {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:24:42.076607  105566 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:24:42.076652  105566 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:24:42.120159  105566 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:24:42.120182  105566 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:24:42.120244  105566 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:24:42.158431  105566 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:24:42.158462  105566 cache_images.go:84] Images are preloaded, skipping loading
	I1010 18:24:42.158474  105566 kubeadm.go:934] updating node { 192.168.39.104 8443 v1.31.1 crio true true} ...
	I1010 18:24:42.158622  105566 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-142481 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:24:42.158705  105566 ssh_runner.go:195] Run: crio config
	I1010 18:24:42.206423  105566 cni.go:84] Creating CNI manager for ""
	I1010 18:24:42.206447  105566 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1010 18:24:42.206463  105566 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 18:24:42.206483  105566 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.104 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-142481 NodeName:ha-142481 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:24:42.206622  105566 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-142481"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.104
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.104"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
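
The kubeadm config above is rendered from the option set printed at kubeadm.go:181. A minimal sketch of rendering one fragment of it with text/template follows; the template text and parameter struct are simplified assumptions, not minikube's actual bootstrapper template:

    // kubeadmcfg_sketch.go - illustrative sketch: render a fragment of the kubeadm
    // config above from Go values with text/template.
    package main

    import (
        "log"
        "os"
        "text/template"
    )

    const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    type params struct {
        AdvertiseAddress string
        APIServerPort    int
        CRISocket        string
        NodeName         string
        NodeIP           string
    }

    func main() {
        p := params{ // values from the log above
            AdvertiseAddress: "192.168.39.104",
            APIServerPort:    8443,
            CRISocket:        "unix:///var/run/crio/crio.sock",
            NodeName:         "ha-142481",
            NodeIP:           "192.168.39.104",
        }
        tmpl := template.Must(template.New("kubeadm").Parse(fragment))
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            log.Fatal(err)
        }
    }
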
	
	I1010 18:24:42.206640  105566 kube-vip.go:115] generating kube-vip config ...
	I1010 18:24:42.206681  105566 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1010 18:24:42.218504  105566 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1010 18:24:42.218604  105566 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
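
The kube-vip config above is written as a static pod manifest; it is later copied to /etc/kubernetes/manifests/kube-vip.yaml, the kubelet's staticPodPath from the config earlier. A minimal sketch that checks such a manifest parses as a core/v1 Pod is below, using sigs.k8s.io/yaml; this is an illustrative validation step, not something minikube itself runs:

    // kubevip_manifest_check.go - illustrative sketch: confirm the generated kube-vip
    // static pod manifest parses as a core/v1 Pod before it is dropped into the
    // kubelet's staticPodPath.
    package main

    import (
        "fmt"
        "log"
        "os"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        data, err := os.ReadFile("kube-vip.yaml") // local copy of the manifest shown above
        if err != nil {
            log.Fatal(err)
        }
        var pod corev1.Pod
        if err := yaml.Unmarshal(data, &pod); err != nil {
            log.Fatalf("manifest does not parse as a Pod: %v", err)
        }
        if len(pod.Spec.Containers) == 0 {
            log.Fatal("manifest has no containers")
        }
        fmt.Printf("pod %s/%s, image %s, hostNetwork=%v\n",
            pod.Namespace, pod.Name, pod.Spec.Containers[0].Image, pod.Spec.HostNetwork)
    }
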
	I1010 18:24:42.218697  105566 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:24:42.228568  105566 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:24:42.228640  105566 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1010 18:24:42.238233  105566 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1010 18:24:42.255239  105566 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:24:42.272092  105566 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1010 18:24:42.289591  105566 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1010 18:24:42.306980  105566 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1010 18:24:42.311912  105566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:24:42.460829  105566 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:24:42.475897  105566 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481 for IP: 192.168.39.104
	I1010 18:24:42.475924  105566 certs.go:194] generating shared ca certs ...
	I1010 18:24:42.475941  105566 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:24:42.476127  105566 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:24:42.476187  105566 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:24:42.476199  105566 certs.go:256] generating profile certs ...
	I1010 18:24:42.476298  105566 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/client.key
	I1010 18:24:42.476334  105566 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.8eb6d082
	I1010 18:24:42.476365  105566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.8eb6d082 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.104 192.168.39.186 192.168.39.175 192.168.39.254]
	I1010 18:24:42.740943  105566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.8eb6d082 ...
	I1010 18:24:42.740981  105566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.8eb6d082: {Name:mkb42965377953a0f50d0ba2dc7c2ec3a85872d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:24:42.741155  105566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.8eb6d082 ...
	I1010 18:24:42.741166  105566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.8eb6d082: {Name:mkecaa2382e2d924f9081504237cbd5394d213b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:24:42.741258  105566 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt.8eb6d082 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt
	I1010 18:24:42.741423  105566 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key.8eb6d082 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key
	I1010 18:24:42.741562  105566 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key
	I1010 18:24:42.741579  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:24:42.741593  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:24:42.741606  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:24:42.741618  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:24:42.741632  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:24:42.741644  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:24:42.741660  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:24:42.741675  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:24:42.741725  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:24:42.741752  105566 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:24:42.741761  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:24:42.741789  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:24:42.741810  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:24:42.741831  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:24:42.741868  105566 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:24:42.741892  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:24:42.741905  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:24:42.741917  105566 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:24:42.742475  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:24:42.771347  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:24:42.796627  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:24:42.822730  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:24:42.849047  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 18:24:42.875516  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:24:42.901199  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:24:42.925795  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/ha-142481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:24:42.951007  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:24:42.976727  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:24:43.000736  105566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:24:43.026304  105566 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:24:43.044525  105566 ssh_runner.go:195] Run: openssl version
	I1010 18:24:43.050725  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:24:43.061836  105566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:24:43.066494  105566 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:24:43.066546  105566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:24:43.072205  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:24:43.081254  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:24:43.092147  105566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:24:43.096621  105566 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:24:43.096674  105566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:24:43.102292  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:24:43.111591  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:24:43.122378  105566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:24:43.127233  105566 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:24:43.127289  105566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:24:43.133161  105566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
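The sequence just above is minikube wiring extra CA certificates into the node's trust store: each PEM under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and /etc/ssl/certs/<hash>.0 is symlinked to it unless a link is already present. A minimal Go sketch of that pattern, assuming openssl is on PATH and the process can write to /etc/ssl/certs; the helper name and hard-coded path are illustrative, not minikube's code:

// installCACert mirrors the pattern in the log above: derive the OpenSSL
// subject hash of a PEM and link /etc/ssl/certs/<hash>.0 to it so TLS
// clients can find it. Simplified, illustration only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: test -L <link> || ln -fs <pem> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // link (or file) already present
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}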
	I1010 18:24:43.142538  105566 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:24:43.147321  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:24:43.152973  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:24:43.158637  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:24:43.164323  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:24:43.169916  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:24:43.175512  105566 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
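Each "openssl x509 ... -checkend 86400" call above asks whether the certificate will still be valid 24 hours from now. A rough Go equivalent using crypto/x509, with an illustrative path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within duration d, the same question -checkend answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}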
	I1010 18:24:43.181088  105566 kubeadm.go:392] StartCluster: {Name:ha-142481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-142481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.104 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.175 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.164 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:24:43.181254  105566 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:24:43.181298  105566 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:24:43.222814  105566 cri.go:89] found id: "85a0a1ee7a4cf4538126707aee79f70aec9dcb25dfbade502986283693e191ce"
	I1010 18:24:43.222838  105566 cri.go:89] found id: "bae82cf35c584cc9a970120a0e5807eb94fbae4a014b20431c5dcbb6f9cf74a7"
	I1010 18:24:43.222841  105566 cri.go:89] found id: "c7d0b95f4e67fff73260e403f16b365fbb721bc6dea2c2b868fb3c44f8b72844"
	I1010 18:24:43.222844  105566 cri.go:89] found id: "4dfd60c6197ecf8458417fd5dd0853e22c5baa8dd86f0bfa0d706e1ba6f65928"
	I1010 18:24:43.222847  105566 cri.go:89] found id: "018e6370bdfdae261e6d11d25c43c0456fb2814ad544f82f775181d88055bf37"
	I1010 18:24:43.222850  105566 cri.go:89] found id: "5c208648c013d8c43a6964604c6ae8de3a53473a5fd4e7885a2755580435fb2e"
	I1010 18:24:43.222852  105566 cri.go:89] found id: "b32ac96128061a8cd1f65949d4849412dd90d7656d1619c49a39c2a6f8cb40d3"
	I1010 18:24:43.222855  105566 cri.go:89] found id: "9f7d32719ebd232c16c4b9d557726714c2b6c1836b556ff4ca647c75d46c03e9"
	I1010 18:24:43.222857  105566 cri.go:89] found id: "80e86419d2aadedb18174a33f8cbe97c26931608fd3d13de863fea4148c76dc4"
	I1010 18:24:43.222863  105566 cri.go:89] found id: "751981b34b5e95c8f49366cd245e2877efdef340757e2957722cc0c59f16c09c"
	I1010 18:24:43.222866  105566 cri.go:89] found id: "4d7eb644bee429f0aef0448b6ea6d1efc310f7b08553ebb2af7781f4c493bbaf"
	I1010 18:24:43.222869  105566 cri.go:89] found id: "43b160f9e1140bbc977bac3d49805593ce66960b30fe85ac1e6b104437e5a026"
	I1010 18:24:43.222873  105566 cri.go:89] found id: "206693e605977eabf0659fe5400e46be795755080a225b01d46bb7533d055c58"
	I1010 18:24:43.222877  105566 cri.go:89] found id: ""
	I1010 18:24:43.222930  105566 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
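The tail of the dump shows minikube enumerating kube-system containers with "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system". A small sketch of issuing that same command and collecting the returned IDs; it needs crictl and root on the node, so treat it as illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs runs the same crictl invocation seen in the log
// and returns one container ID per non-empty output line.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("found", len(ids), "containers")
}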
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-142481 -n ha-142481
helpers_test.go:261: (dbg) Run:  kubectl --context ha-142481 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.20s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (323.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-965291
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-965291
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-965291: exit status 82 (2m1.917499107s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-965291-m03"  ...
	* Stopping node "multinode-965291-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-965291" : exit status 82
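The stop above hit GUEST_STOP_TIMEOUT and the binary exited with status 82 after roughly two minutes. A minimal sketch of driving the same command from Go with an explicit deadline and reading the exit code; the binary path and profile name come from the log, while the three-minute timeout is an assumption:

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()

	// Same invocation the test makes, bounded by the context deadline.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "multinode-965291")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// e.g. 82, the GUEST_STOP_TIMEOUT failure seen above.
		fmt.Println("exit status:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("run error:", err)
	}
}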
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-965291 --wait=true -v=8 --alsologtostderr
E1010 18:47:50.018348   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:49:59.530589   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:50:53.090394   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-965291 --wait=true -v=8 --alsologtostderr: (3m18.539866218s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-965291
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-965291 -n multinode-965291
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-965291 logs -n 25: (2.111221691s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:44 UTC | 10 Oct 24 18:44 UTC |
	|         | multinode-965291-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp multinode-965291-m02:/home/docker/cp-test.txt                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:44 UTC | 10 Oct 24 18:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3017167106/001/cp-test_multinode-965291-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:44 UTC | 10 Oct 24 18:44 UTC |
	|         | multinode-965291-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp multinode-965291-m02:/home/docker/cp-test.txt                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:44 UTC | 10 Oct 24 18:44 UTC |
	|         | multinode-965291:/home/docker/cp-test_multinode-965291-m02_multinode-965291.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:44 UTC | 10 Oct 24 18:44 UTC |
	|         | multinode-965291-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n multinode-965291 sudo cat                                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | /home/docker/cp-test_multinode-965291-m02_multinode-965291.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp multinode-965291-m02:/home/docker/cp-test.txt                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03:/home/docker/cp-test_multinode-965291-m02_multinode-965291-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n multinode-965291-m03 sudo cat                                   | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | /home/docker/cp-test_multinode-965291-m02_multinode-965291-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp testdata/cp-test.txt                                                | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp multinode-965291-m03:/home/docker/cp-test.txt                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3017167106/001/cp-test_multinode-965291-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp multinode-965291-m03:/home/docker/cp-test.txt                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291:/home/docker/cp-test_multinode-965291-m03_multinode-965291.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n multinode-965291 sudo cat                                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | /home/docker/cp-test_multinode-965291-m03_multinode-965291.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp multinode-965291-m03:/home/docker/cp-test.txt                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m02:/home/docker/cp-test_multinode-965291-m03_multinode-965291-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n multinode-965291-m02 sudo cat                                   | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | /home/docker/cp-test_multinode-965291-m03_multinode-965291-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-965291 node stop m03                                                          | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	| node    | multinode-965291 node start                                                             | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-965291                                                                | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC |                     |
	| stop    | -p multinode-965291                                                                     | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC |                     |
	| start   | -p multinode-965291                                                                     | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:47 UTC | 10 Oct 24 18:51 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-965291                                                                | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:51 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 18:47:46
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:47:46.293173  117647 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:47:46.293450  117647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:47:46.293461  117647 out.go:358] Setting ErrFile to fd 2...
	I1010 18:47:46.293466  117647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:47:46.293697  117647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:47:46.294262  117647 out.go:352] Setting JSON to false
	I1010 18:47:46.295211  117647 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9012,"bootTime":1728577054,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:47:46.295285  117647 start.go:139] virtualization: kvm guest
	I1010 18:47:46.297725  117647 out.go:177] * [multinode-965291] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 18:47:46.299292  117647 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 18:47:46.299310  117647 notify.go:220] Checking for updates...
	I1010 18:47:46.302434  117647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:47:46.303975  117647 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:47:46.305927  117647 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:47:46.307844  117647 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:47:46.309708  117647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:47:46.311776  117647 config.go:182] Loaded profile config "multinode-965291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:47:46.311933  117647 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 18:47:46.312485  117647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:47:46.312538  117647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:47:46.328741  117647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I1010 18:47:46.329358  117647 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:47:46.329937  117647 main.go:141] libmachine: Using API Version  1
	I1010 18:47:46.329961  117647 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:47:46.330317  117647 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:47:46.330495  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:47:46.368088  117647 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 18:47:46.369609  117647 start.go:297] selected driver: kvm2
	I1010 18:47:46.369627  117647 start.go:901] validating driver "kvm2" against &{Name:multinode-965291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-965291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:47:46.369761  117647 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:47:46.370086  117647 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:47:46.370161  117647 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 18:47:46.385840  117647 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 18:47:46.386585  117647 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:47:46.386641  117647 cni.go:84] Creating CNI manager for ""
	I1010 18:47:46.386693  117647 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1010 18:47:46.386766  117647 start.go:340] cluster config:
	{Name:multinode-965291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-965291 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:47:46.386893  117647 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:47:46.388896  117647 out.go:177] * Starting "multinode-965291" primary control-plane node in "multinode-965291" cluster
	I1010 18:47:46.390479  117647 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:47:46.390539  117647 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:47:46.390553  117647 cache.go:56] Caching tarball of preloaded images
	I1010 18:47:46.390695  117647 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:47:46.390712  117647 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:47:46.390855  117647 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/config.json ...
	I1010 18:47:46.391100  117647 start.go:360] acquireMachinesLock for multinode-965291: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:47:46.391165  117647 start.go:364] duration metric: took 39.072µs to acquireMachinesLock for "multinode-965291"
	I1010 18:47:46.391188  117647 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:47:46.391197  117647 fix.go:54] fixHost starting: 
	I1010 18:47:46.391481  117647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:47:46.391523  117647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:47:46.406401  117647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I1010 18:47:46.406914  117647 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:47:46.407442  117647 main.go:141] libmachine: Using API Version  1
	I1010 18:47:46.407459  117647 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:47:46.407824  117647 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:47:46.408046  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:47:46.408232  117647 main.go:141] libmachine: (multinode-965291) Calling .GetState
	I1010 18:47:46.410112  117647 fix.go:112] recreateIfNeeded on multinode-965291: state=Running err=<nil>
	W1010 18:47:46.410132  117647 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 18:47:46.413073  117647 out.go:177] * Updating the running kvm2 "multinode-965291" VM ...
	I1010 18:47:46.414493  117647 machine.go:93] provisionDockerMachine start ...
	I1010 18:47:46.414521  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:47:46.414758  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:46.417653  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.418096  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:46.418134  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.418359  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:47:46.418567  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.418759  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.418900  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:47:46.419059  117647 main.go:141] libmachine: Using SSH client type: native
	I1010 18:47:46.419261  117647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1010 18:47:46.419273  117647 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:47:46.530794  117647 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-965291
	
	I1010 18:47:46.530827  117647 main.go:141] libmachine: (multinode-965291) Calling .GetMachineName
	I1010 18:47:46.531126  117647 buildroot.go:166] provisioning hostname "multinode-965291"
	I1010 18:47:46.531160  117647 main.go:141] libmachine: (multinode-965291) Calling .GetMachineName
	I1010 18:47:46.531332  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:46.534393  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.534897  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:46.534940  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.535114  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:47:46.535307  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.535472  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.535667  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:47:46.535913  117647 main.go:141] libmachine: Using SSH client type: native
	I1010 18:47:46.536145  117647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1010 18:47:46.536163  117647 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-965291 && echo "multinode-965291" | sudo tee /etc/hostname
	I1010 18:47:46.673579  117647 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-965291
	
	I1010 18:47:46.673614  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:46.676609  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.677067  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:46.677091  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.677420  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:47:46.677611  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.677777  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.677916  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:47:46.678063  117647 main.go:141] libmachine: Using SSH client type: native
	I1010 18:47:46.678273  117647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1010 18:47:46.678295  117647 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-965291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-965291/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-965291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:47:46.790570  117647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:47:46.790605  117647 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:47:46.790624  117647 buildroot.go:174] setting up certificates
	I1010 18:47:46.790634  117647 provision.go:84] configureAuth start
	I1010 18:47:46.790644  117647 main.go:141] libmachine: (multinode-965291) Calling .GetMachineName
	I1010 18:47:46.790907  117647 main.go:141] libmachine: (multinode-965291) Calling .GetIP
	I1010 18:47:46.793597  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.794024  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:46.794058  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.794211  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:46.796868  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.797285  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:46.797328  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.797513  117647 provision.go:143] copyHostCerts
	I1010 18:47:46.797545  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:47:46.797575  117647 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:47:46.797582  117647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:47:46.797649  117647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:47:46.797740  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:47:46.797757  117647 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:47:46.797763  117647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:47:46.797787  117647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:47:46.797846  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:47:46.797862  117647 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:47:46.797873  117647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:47:46.797902  117647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:47:46.797961  117647 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.multinode-965291 san=[127.0.0.1 192.168.39.28 localhost minikube multinode-965291]
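The provisioning step above generates a server certificate whose SANs cover 127.0.0.1, the node IP, and the machine host names. A condensed sketch of what that amounts to with crypto/x509; it is self-signed here for brevity (minikube signs with its CA key), and any field values not shown in the log are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-965291"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the "san=[...]" list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "multinode-965291"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.28")},
	}
	// Self-signed for brevity; the real flow uses the cluster CA as parent/signing key.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}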
	I1010 18:47:47.373209  117647 provision.go:177] copyRemoteCerts
	I1010 18:47:47.373279  117647 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:47:47.373307  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:47.375881  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:47.376229  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:47.376265  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:47.376479  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:47:47.376697  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:47.376865  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:47:47.376996  117647 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/multinode-965291/id_rsa Username:docker}
	I1010 18:47:47.460334  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:47:47.460413  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:47:47.486423  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:47:47.486536  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1010 18:47:47.512795  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:47:47.512885  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:47:47.544811  117647 provision.go:87] duration metric: took 754.154389ms to configureAuth
	I1010 18:47:47.544844  117647 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:47:47.545078  117647 config.go:182] Loaded profile config "multinode-965291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:47:47.545151  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:47.547966  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:47.548384  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:47.548419  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:47.548563  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:47:47.548769  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:47.548932  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:47.549055  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:47:47.549225  117647 main.go:141] libmachine: Using SSH client type: native
	I1010 18:47:47.549400  117647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1010 18:47:47.549417  117647 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:49:18.287285  117647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:49:18.287331  117647 machine.go:96] duration metric: took 1m31.872808891s to provisionDockerMachine
	I1010 18:49:18.287350  117647 start.go:293] postStartSetup for "multinode-965291" (driver="kvm2")
	I1010 18:49:18.287365  117647 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:49:18.287398  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:49:18.287781  117647 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:49:18.287810  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:49:18.291535  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.292093  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:18.292126  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.292285  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:49:18.292492  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:49:18.292675  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:49:18.292879  117647 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/multinode-965291/id_rsa Username:docker}
	I1010 18:49:18.381162  117647 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:49:18.386114  117647 command_runner.go:130] > NAME=Buildroot
	I1010 18:49:18.386141  117647 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1010 18:49:18.386148  117647 command_runner.go:130] > ID=buildroot
	I1010 18:49:18.386156  117647 command_runner.go:130] > VERSION_ID=2023.02.9
	I1010 18:49:18.386164  117647 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1010 18:49:18.386199  117647 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:49:18.386215  117647 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:49:18.386281  117647 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:49:18.386358  117647 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:49:18.386370  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:49:18.386464  117647 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:49:18.396778  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:49:18.423812  117647 start.go:296] duration metric: took 136.444288ms for postStartSetup
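postStartSetup above mirrors every file under the profile's files/ directory to the same absolute path on the guest (here 888762.pem lands in /etc/ssl/certs). A rough Go sketch of that mapping follows; localAssets is a made-up helper and the directory path is a placeholder, not minikube's API.

// Illustration of the local-asset scan logged by filesync.go: every file under
// the profile's files/ directory maps to the same absolute path on the guest.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func localAssets(filesDir string) (map[string]string, error) {
	assets := map[string]string{} // local source path -> destination path on the guest
	err := filepath.WalkDir(filesDir, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, relErr := filepath.Rel(filesDir, path)
		if relErr != nil {
			return relErr
		}
		assets[path] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return assets, err
}

func main() {
	assets, err := localAssets(".minikube/files")
	fmt.Println(assets, err)
}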
	I1010 18:49:18.423860  117647 fix.go:56] duration metric: took 1m32.032662324s for fixHost
	I1010 18:49:18.423885  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:49:18.426994  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.427505  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:18.427542  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.427755  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:49:18.427917  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:49:18.428157  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:49:18.428362  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:49:18.428588  117647 main.go:141] libmachine: Using SSH client type: native
	I1010 18:49:18.428747  117647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1010 18:49:18.428758  117647 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:49:18.538384  117647 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728586158.518055076
	
	I1010 18:49:18.538414  117647 fix.go:216] guest clock: 1728586158.518055076
	I1010 18:49:18.538423  117647 fix.go:229] Guest: 2024-10-10 18:49:18.518055076 +0000 UTC Remote: 2024-10-10 18:49:18.42386518 +0000 UTC m=+92.171279660 (delta=94.189896ms)
	I1010 18:49:18.538485  117647 fix.go:200] guest clock delta is within tolerance: 94.189896ms
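fix.go above reads the guest clock with `date +%s.%N`, compares it against the host clock and accepts the skew when it falls inside a tolerance. A small Go sketch of that check follows; the 2-second tolerance is an assumption for illustration, not the value minikube uses.

// Sketch: parse the guest's `date +%s.%N` output and compare it to the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	// Value taken from the log line above; tolerance is an assumed example.
	delta, ok, err := clockDelta("1728586158.518055076", 2*time.Second)
	fmt.Println(delta, ok, err)
}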
	I1010 18:49:18.538496  117647 start.go:83] releasing machines lock for "multinode-965291", held for 1m32.147315907s
	I1010 18:49:18.538538  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:49:18.538820  117647 main.go:141] libmachine: (multinode-965291) Calling .GetIP
	I1010 18:49:18.541786  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.542290  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:18.542325  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.542496  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:49:18.543133  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:49:18.543380  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:49:18.543514  117647 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:49:18.543576  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:49:18.543594  117647 ssh_runner.go:195] Run: cat /version.json
	I1010 18:49:18.543620  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:49:18.546722  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.547054  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.547114  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:18.547150  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.547306  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:49:18.547477  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:49:18.547539  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:18.547570  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.547637  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:49:18.547715  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:49:18.547799  117647 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/multinode-965291/id_rsa Username:docker}
	I1010 18:49:18.547858  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:49:18.547996  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:49:18.548149  117647 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/multinode-965291/id_rsa Username:docker}
	I1010 18:49:18.626343  117647 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1010 18:49:18.626536  117647 ssh_runner.go:195] Run: systemctl --version
	I1010 18:49:18.657814  117647 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1010 18:49:18.657898  117647 command_runner.go:130] > systemd 252 (252)
	I1010 18:49:18.657944  117647 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1010 18:49:18.658013  117647 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:49:18.818506  117647 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1010 18:49:18.826335  117647 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1010 18:49:18.826624  117647 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:49:18.826691  117647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:49:18.836611  117647 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:49:18.836642  117647 start.go:495] detecting cgroup driver to use...
	I1010 18:49:18.836712  117647 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:49:18.854116  117647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:49:18.868936  117647 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:49:18.869002  117647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:49:18.883684  117647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:49:18.898280  117647 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:49:19.045206  117647 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:49:19.211098  117647 docker.go:233] disabling docker service ...
	I1010 18:49:19.211165  117647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:49:19.231977  117647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:49:19.246964  117647 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:49:19.401708  117647 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:49:19.559847  117647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:49:19.576284  117647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:49:19.595711  117647 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
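The command above points crictl at the CRI-O socket by writing /etc/crictl.yaml. A trivial Go sketch of producing that file; path and contents are taken from the log, and writing locally instead of over SSH is a simplification.

// Trivial sketch: write the crictl endpoint config shown in the log.
package main

import (
	"log"
	"os"
)

func main() {
	const crictlConfig = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}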
	I1010 18:49:19.595991  117647 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:49:19.596152  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.608281  117647 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:49:19.608353  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.620270  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.633140  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.644308  117647 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:49:19.656507  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.668077  117647 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.691194  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.706550  117647 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:49:19.718575  117647 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1010 18:49:19.719145  117647 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:49:19.759850  117647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:49:19.946179  117647 ssh_runner.go:195] Run: sudo systemctl restart crio
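The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs and move conmon into the pod cgroup before CRI-O is restarted. Below is a Go sketch of a subset of those edits applied to the file contents in memory; it is illustrative only, not the code minikube runs.

// Sketch of (a subset of) the 02-crio.conf edits performed with sed above.
package main

import (
	"fmt"
	"regexp"
)

func patchCrioConf(conf, pauseImage, cgroupManager string) string {
	// Pin the pause image and force the requested cgroup manager.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}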
	I1010 18:49:20.191891  117647 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:49:20.191978  117647 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:49:20.197305  117647 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1010 18:49:20.197334  117647 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1010 18:49:20.197344  117647 command_runner.go:130] > Device: 0,22	Inode: 1350        Links: 1
	I1010 18:49:20.197352  117647 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1010 18:49:20.197359  117647 command_runner.go:130] > Access: 2024-10-10 18:49:20.025969704 +0000
	I1010 18:49:20.197367  117647 command_runner.go:130] > Modify: 2024-10-10 18:49:20.025969704 +0000
	I1010 18:49:20.197374  117647 command_runner.go:130] > Change: 2024-10-10 18:49:20.025969704 +0000
	I1010 18:49:20.197379  117647 command_runner.go:130] >  Birth: -
	I1010 18:49:20.197398  117647 start.go:563] Will wait 60s for crictl version
	I1010 18:49:20.197528  117647 ssh_runner.go:195] Run: which crictl
	I1010 18:49:20.201896  117647 command_runner.go:130] > /usr/bin/crictl
	I1010 18:49:20.202057  117647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:49:20.244488  117647 command_runner.go:130] > Version:  0.1.0
	I1010 18:49:20.244516  117647 command_runner.go:130] > RuntimeName:  cri-o
	I1010 18:49:20.244521  117647 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1010 18:49:20.244526  117647 command_runner.go:130] > RuntimeApiVersion:  v1
	I1010 18:49:20.244541  117647 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
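The runtime check above parses the key/value output of `sudo crictl version`. A small Go sketch of reading those fields into a map; the key names match the output shown in the log, while the parsing helper itself is hypothetical.

// Sketch: split "Key:  value" lines from `crictl version` into a map.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		if key, val, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(key)] = strings.TrimSpace(val)
		}
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	fmt.Println(parseCrictlVersion(out)["RuntimeVersion"]) // 1.29.1
}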
	I1010 18:49:20.244608  117647 ssh_runner.go:195] Run: crio --version
	I1010 18:49:20.273755  117647 command_runner.go:130] > crio version 1.29.1
	I1010 18:49:20.273787  117647 command_runner.go:130] > Version:        1.29.1
	I1010 18:49:20.273796  117647 command_runner.go:130] > GitCommit:      unknown
	I1010 18:49:20.273803  117647 command_runner.go:130] > GitCommitDate:  unknown
	I1010 18:49:20.273810  117647 command_runner.go:130] > GitTreeState:   clean
	I1010 18:49:20.273819  117647 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1010 18:49:20.273825  117647 command_runner.go:130] > GoVersion:      go1.21.6
	I1010 18:49:20.273831  117647 command_runner.go:130] > Compiler:       gc
	I1010 18:49:20.273839  117647 command_runner.go:130] > Platform:       linux/amd64
	I1010 18:49:20.273846  117647 command_runner.go:130] > Linkmode:       dynamic
	I1010 18:49:20.273853  117647 command_runner.go:130] > BuildTags:      
	I1010 18:49:20.273860  117647 command_runner.go:130] >   containers_image_ostree_stub
	I1010 18:49:20.273870  117647 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1010 18:49:20.273882  117647 command_runner.go:130] >   btrfs_noversion
	I1010 18:49:20.273891  117647 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1010 18:49:20.273898  117647 command_runner.go:130] >   libdm_no_deferred_remove
	I1010 18:49:20.273908  117647 command_runner.go:130] >   seccomp
	I1010 18:49:20.273916  117647 command_runner.go:130] > LDFlags:          unknown
	I1010 18:49:20.273924  117647 command_runner.go:130] > SeccompEnabled:   true
	I1010 18:49:20.273930  117647 command_runner.go:130] > AppArmorEnabled:  false
	I1010 18:49:20.275048  117647 ssh_runner.go:195] Run: crio --version
	I1010 18:49:20.303968  117647 command_runner.go:130] > crio version 1.29.1
	I1010 18:49:20.303999  117647 command_runner.go:130] > Version:        1.29.1
	I1010 18:49:20.304006  117647 command_runner.go:130] > GitCommit:      unknown
	I1010 18:49:20.304010  117647 command_runner.go:130] > GitCommitDate:  unknown
	I1010 18:49:20.304014  117647 command_runner.go:130] > GitTreeState:   clean
	I1010 18:49:20.304023  117647 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1010 18:49:20.304029  117647 command_runner.go:130] > GoVersion:      go1.21.6
	I1010 18:49:20.304035  117647 command_runner.go:130] > Compiler:       gc
	I1010 18:49:20.304042  117647 command_runner.go:130] > Platform:       linux/amd64
	I1010 18:49:20.304048  117647 command_runner.go:130] > Linkmode:       dynamic
	I1010 18:49:20.304061  117647 command_runner.go:130] > BuildTags:      
	I1010 18:49:20.304067  117647 command_runner.go:130] >   containers_image_ostree_stub
	I1010 18:49:20.304072  117647 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1010 18:49:20.304089  117647 command_runner.go:130] >   btrfs_noversion
	I1010 18:49:20.304097  117647 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1010 18:49:20.304104  117647 command_runner.go:130] >   libdm_no_deferred_remove
	I1010 18:49:20.304109  117647 command_runner.go:130] >   seccomp
	I1010 18:49:20.304115  117647 command_runner.go:130] > LDFlags:          unknown
	I1010 18:49:20.304124  117647 command_runner.go:130] > SeccompEnabled:   true
	I1010 18:49:20.304130  117647 command_runner.go:130] > AppArmorEnabled:  false
	I1010 18:49:20.311483  117647 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:49:20.313396  117647 main.go:141] libmachine: (multinode-965291) Calling .GetIP
	I1010 18:49:20.316646  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:20.317139  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:20.317169  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:20.317403  117647 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:49:20.322205  117647 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1010 18:49:20.322329  117647 kubeadm.go:883] updating cluster {Name:multinode-965291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-965291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:49:20.322475  117647 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:49:20.322514  117647 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:49:20.364746  117647 command_runner.go:130] > {
	I1010 18:49:20.364770  117647 command_runner.go:130] >   "images": [
	I1010 18:49:20.364774  117647 command_runner.go:130] >     {
	I1010 18:49:20.364783  117647 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1010 18:49:20.364788  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.364798  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1010 18:49:20.364802  117647 command_runner.go:130] >       ],
	I1010 18:49:20.364806  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.364814  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1010 18:49:20.364821  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1010 18:49:20.364824  117647 command_runner.go:130] >       ],
	I1010 18:49:20.364829  117647 command_runner.go:130] >       "size": "87190579",
	I1010 18:49:20.364833  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.364837  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.364843  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.364861  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.364865  117647 command_runner.go:130] >     },
	I1010 18:49:20.364872  117647 command_runner.go:130] >     {
	I1010 18:49:20.364880  117647 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1010 18:49:20.364886  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.364894  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1010 18:49:20.364898  117647 command_runner.go:130] >       ],
	I1010 18:49:20.364903  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.364909  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1010 18:49:20.364918  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1010 18:49:20.364922  117647 command_runner.go:130] >       ],
	I1010 18:49:20.364926  117647 command_runner.go:130] >       "size": "1363676",
	I1010 18:49:20.364930  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.364935  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.364941  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.364945  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.364948  117647 command_runner.go:130] >     },
	I1010 18:49:20.364952  117647 command_runner.go:130] >     {
	I1010 18:49:20.364958  117647 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1010 18:49:20.364963  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.364967  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1010 18:49:20.364970  117647 command_runner.go:130] >       ],
	I1010 18:49:20.364975  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.364985  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1010 18:49:20.364992  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1010 18:49:20.364998  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365002  117647 command_runner.go:130] >       "size": "31470524",
	I1010 18:49:20.365005  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.365012  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365015  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365019  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365023  117647 command_runner.go:130] >     },
	I1010 18:49:20.365026  117647 command_runner.go:130] >     {
	I1010 18:49:20.365031  117647 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1010 18:49:20.365036  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365041  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1010 18:49:20.365045  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365048  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365054  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1010 18:49:20.365067  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1010 18:49:20.365073  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365076  117647 command_runner.go:130] >       "size": "63273227",
	I1010 18:49:20.365080  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.365087  117647 command_runner.go:130] >       "username": "nonroot",
	I1010 18:49:20.365091  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365096  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365099  117647 command_runner.go:130] >     },
	I1010 18:49:20.365104  117647 command_runner.go:130] >     {
	I1010 18:49:20.365110  117647 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1010 18:49:20.365116  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365120  117647 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1010 18:49:20.365126  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365130  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365137  117647 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1010 18:49:20.365145  117647 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1010 18:49:20.365149  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365161  117647 command_runner.go:130] >       "size": "149009664",
	I1010 18:49:20.365166  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.365170  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.365174  117647 command_runner.go:130] >       },
	I1010 18:49:20.365178  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365182  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365186  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365190  117647 command_runner.go:130] >     },
	I1010 18:49:20.365195  117647 command_runner.go:130] >     {
	I1010 18:49:20.365205  117647 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1010 18:49:20.365212  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365217  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1010 18:49:20.365222  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365226  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365234  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1010 18:49:20.365242  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1010 18:49:20.365246  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365252  117647 command_runner.go:130] >       "size": "95237600",
	I1010 18:49:20.365256  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.365262  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.365266  117647 command_runner.go:130] >       },
	I1010 18:49:20.365272  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365276  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365280  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365284  117647 command_runner.go:130] >     },
	I1010 18:49:20.365287  117647 command_runner.go:130] >     {
	I1010 18:49:20.365293  117647 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1010 18:49:20.365299  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365304  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1010 18:49:20.365308  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365312  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365321  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1010 18:49:20.365328  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1010 18:49:20.365334  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365338  117647 command_runner.go:130] >       "size": "89437508",
	I1010 18:49:20.365342  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.365346  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.365352  117647 command_runner.go:130] >       },
	I1010 18:49:20.365355  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365359  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365363  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365366  117647 command_runner.go:130] >     },
	I1010 18:49:20.365370  117647 command_runner.go:130] >     {
	I1010 18:49:20.365376  117647 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1010 18:49:20.365382  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365387  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1010 18:49:20.365391  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365395  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365420  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1010 18:49:20.365429  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1010 18:49:20.365432  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365448  117647 command_runner.go:130] >       "size": "92733849",
	I1010 18:49:20.365454  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.365458  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365462  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365466  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365470  117647 command_runner.go:130] >     },
	I1010 18:49:20.365473  117647 command_runner.go:130] >     {
	I1010 18:49:20.365478  117647 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1010 18:49:20.365482  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365487  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1010 18:49:20.365490  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365494  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365500  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1010 18:49:20.365507  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1010 18:49:20.365511  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365515  117647 command_runner.go:130] >       "size": "68420934",
	I1010 18:49:20.365519  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.365523  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.365526  117647 command_runner.go:130] >       },
	I1010 18:49:20.365530  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365536  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365542  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365545  117647 command_runner.go:130] >     },
	I1010 18:49:20.365549  117647 command_runner.go:130] >     {
	I1010 18:49:20.365554  117647 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1010 18:49:20.365561  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365565  117647 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1010 18:49:20.365569  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365573  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365580  117647 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1010 18:49:20.365589  117647 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1010 18:49:20.365593  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365597  117647 command_runner.go:130] >       "size": "742080",
	I1010 18:49:20.365601  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.365605  117647 command_runner.go:130] >         "value": "65535"
	I1010 18:49:20.365608  117647 command_runner.go:130] >       },
	I1010 18:49:20.365612  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365616  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365619  117647 command_runner.go:130] >       "pinned": true
	I1010 18:49:20.365623  117647 command_runner.go:130] >     }
	I1010 18:49:20.365626  117647 command_runner.go:130] >   ]
	I1010 18:49:20.365629  117647 command_runner.go:130] > }
	I1010 18:49:20.366333  117647 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:49:20.366357  117647 crio.go:433] Images already preloaded, skipping extraction
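crio.go decides above that the preload can be skipped by inspecting the `crictl images --output json` payload. The Go sketch below shows that kind of check: decode the JSON and confirm the expected repoTags are present. The struct mirrors the fields in the log output; the wanted-image list in main is abbreviated for illustration and is not minikube's actual list.

// Sketch: decode `crictl images --output json` and check for expected repoTags.
package main

import (
	"encoding/json"
	"fmt"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImages(raw []byte, wanted ...string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, w := range wanted {
		if !have[w] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"]}]}`)
	ok, err := hasImages(raw, "registry.k8s.io/kube-apiserver:v1.31.1", "registry.k8s.io/etcd:3.5.15-0")
	fmt.Println(ok, err) // false <nil>: the etcd tag is missing from this sample payload
}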
	I1010 18:49:20.366407  117647 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:49:20.402003  117647 command_runner.go:130] > {
	I1010 18:49:20.402027  117647 command_runner.go:130] >   "images": [
	I1010 18:49:20.402031  117647 command_runner.go:130] >     {
	I1010 18:49:20.402039  117647 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1010 18:49:20.402044  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402050  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1010 18:49:20.402054  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402058  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402065  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1010 18:49:20.402072  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1010 18:49:20.402076  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402080  117647 command_runner.go:130] >       "size": "87190579",
	I1010 18:49:20.402083  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.402088  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402097  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402103  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402107  117647 command_runner.go:130] >     },
	I1010 18:49:20.402113  117647 command_runner.go:130] >     {
	I1010 18:49:20.402119  117647 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1010 18:49:20.402125  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402130  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1010 18:49:20.402144  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402151  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402158  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1010 18:49:20.402167  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1010 18:49:20.402173  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402178  117647 command_runner.go:130] >       "size": "1363676",
	I1010 18:49:20.402184  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.402191  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402198  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402202  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402205  117647 command_runner.go:130] >     },
	I1010 18:49:20.402210  117647 command_runner.go:130] >     {
	I1010 18:49:20.402216  117647 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1010 18:49:20.402232  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402239  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1010 18:49:20.402243  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402248  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402255  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1010 18:49:20.402265  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1010 18:49:20.402271  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402275  117647 command_runner.go:130] >       "size": "31470524",
	I1010 18:49:20.402281  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.402285  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402291  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402295  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402300  117647 command_runner.go:130] >     },
	I1010 18:49:20.402304  117647 command_runner.go:130] >     {
	I1010 18:49:20.402312  117647 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1010 18:49:20.402316  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402323  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1010 18:49:20.402327  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402333  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402340  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1010 18:49:20.402360  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1010 18:49:20.402366  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402371  117647 command_runner.go:130] >       "size": "63273227",
	I1010 18:49:20.402376  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.402380  117647 command_runner.go:130] >       "username": "nonroot",
	I1010 18:49:20.402384  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402388  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402395  117647 command_runner.go:130] >     },
	I1010 18:49:20.402398  117647 command_runner.go:130] >     {
	I1010 18:49:20.402408  117647 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1010 18:49:20.402416  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402423  117647 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1010 18:49:20.402431  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402437  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402450  117647 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1010 18:49:20.402464  117647 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1010 18:49:20.402470  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402477  117647 command_runner.go:130] >       "size": "149009664",
	I1010 18:49:20.402484  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.402488  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.402493  117647 command_runner.go:130] >       },
	I1010 18:49:20.402498  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402503  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402508  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402513  117647 command_runner.go:130] >     },
	I1010 18:49:20.402517  117647 command_runner.go:130] >     {
	I1010 18:49:20.402528  117647 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1010 18:49:20.402538  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402548  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1010 18:49:20.402557  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402564  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402571  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1010 18:49:20.402580  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1010 18:49:20.402595  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402602  117647 command_runner.go:130] >       "size": "95237600",
	I1010 18:49:20.402606  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.402614  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.402622  117647 command_runner.go:130] >       },
	I1010 18:49:20.402632  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402639  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402649  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402657  117647 command_runner.go:130] >     },
	I1010 18:49:20.402665  117647 command_runner.go:130] >     {
	I1010 18:49:20.402674  117647 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1010 18:49:20.402680  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402686  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1010 18:49:20.402691  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402696  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402705  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1010 18:49:20.402718  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1010 18:49:20.402728  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402734  117647 command_runner.go:130] >       "size": "89437508",
	I1010 18:49:20.402743  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.402753  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.402761  117647 command_runner.go:130] >       },
	I1010 18:49:20.402767  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402777  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402785  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402790  117647 command_runner.go:130] >     },
	I1010 18:49:20.402793  117647 command_runner.go:130] >     {
	I1010 18:49:20.402802  117647 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1010 18:49:20.402809  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402819  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1010 18:49:20.402828  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402835  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402872  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1010 18:49:20.402892  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1010 18:49:20.402900  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402904  117647 command_runner.go:130] >       "size": "92733849",
	I1010 18:49:20.402912  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.402921  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402932  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402940  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402949  117647 command_runner.go:130] >     },
	I1010 18:49:20.402958  117647 command_runner.go:130] >     {
	I1010 18:49:20.402969  117647 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1010 18:49:20.402978  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402987  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1010 18:49:20.402994  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402999  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.403014  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1010 18:49:20.403028  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1010 18:49:20.403036  117647 command_runner.go:130] >       ],
	I1010 18:49:20.403046  117647 command_runner.go:130] >       "size": "68420934",
	I1010 18:49:20.403054  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.403062  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.403069  117647 command_runner.go:130] >       },
	I1010 18:49:20.403076  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.403082  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.403087  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.403095  117647 command_runner.go:130] >     },
	I1010 18:49:20.403104  117647 command_runner.go:130] >     {
	I1010 18:49:20.403114  117647 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1010 18:49:20.403123  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.403133  117647 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1010 18:49:20.403142  117647 command_runner.go:130] >       ],
	I1010 18:49:20.403150  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.403164  117647 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1010 18:49:20.403175  117647 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1010 18:49:20.403188  117647 command_runner.go:130] >       ],
	I1010 18:49:20.403199  117647 command_runner.go:130] >       "size": "742080",
	I1010 18:49:20.403205  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.403212  117647 command_runner.go:130] >         "value": "65535"
	I1010 18:49:20.403221  117647 command_runner.go:130] >       },
	I1010 18:49:20.403234  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.403243  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.403253  117647 command_runner.go:130] >       "pinned": true
	I1010 18:49:20.403261  117647 command_runner.go:130] >     }
	I1010 18:49:20.403269  117647 command_runner.go:130] >   ]
	I1010 18:49:20.403275  117647 command_runner.go:130] > }
	I1010 18:49:20.403412  117647 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:49:20.403425  117647 cache_images.go:84] Images are preloaded, skipping loading
	I1010 18:49:20.403434  117647 kubeadm.go:934] updating node { 192.168.39.28 8443 v1.31.1 crio true true} ...
	I1010 18:49:20.403555  117647 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-965291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-965291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
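kubeadm.go renders the kubelet systemd override shown above with the node's name and IP substituted in. The Go sketch below generates that unit text with text/template; the flag values come from the log, while the template and struct are a hypothetical reconstruction rather than minikube's own code.

// Sketch: render the kubelet unit override from the values logged above.
package main

import (
	"fmt"
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	data := struct{ Version, NodeName, NodeIP string }{"v1.31.1", "multinode-965291", "192.168.39.28"}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}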
	I1010 18:49:20.403678  117647 ssh_runner.go:195] Run: crio config
	I1010 18:49:20.438214  117647 command_runner.go:130] ! time="2024-10-10 18:49:20.418078460Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1010 18:49:20.444692  117647 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1010 18:49:20.451528  117647 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1010 18:49:20.451560  117647 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1010 18:49:20.451575  117647 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1010 18:49:20.451581  117647 command_runner.go:130] > #
	I1010 18:49:20.451592  117647 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1010 18:49:20.451602  117647 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1010 18:49:20.451613  117647 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1010 18:49:20.451623  117647 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1010 18:49:20.451629  117647 command_runner.go:130] > # reload'.
	I1010 18:49:20.451638  117647 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1010 18:49:20.451653  117647 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1010 18:49:20.451663  117647 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1010 18:49:20.451673  117647 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1010 18:49:20.451683  117647 command_runner.go:130] > [crio]
	I1010 18:49:20.451692  117647 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1010 18:49:20.451699  117647 command_runner.go:130] > # containers images, in this directory.
	I1010 18:49:20.451707  117647 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1010 18:49:20.451736  117647 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1010 18:49:20.451748  117647 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1010 18:49:20.451759  117647 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1010 18:49:20.451766  117647 command_runner.go:130] > # imagestore = ""
	I1010 18:49:20.451777  117647 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1010 18:49:20.451783  117647 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1010 18:49:20.451790  117647 command_runner.go:130] > storage_driver = "overlay"
	I1010 18:49:20.451796  117647 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1010 18:49:20.451802  117647 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1010 18:49:20.451807  117647 command_runner.go:130] > storage_option = [
	I1010 18:49:20.451811  117647 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1010 18:49:20.451817  117647 command_runner.go:130] > ]
	I1010 18:49:20.451823  117647 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1010 18:49:20.451832  117647 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1010 18:49:20.451837  117647 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1010 18:49:20.451844  117647 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1010 18:49:20.451850  117647 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1010 18:49:20.451856  117647 command_runner.go:130] > # always happen on a node reboot
	I1010 18:49:20.451861  117647 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1010 18:49:20.451873  117647 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1010 18:49:20.451881  117647 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1010 18:49:20.451886  117647 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1010 18:49:20.451893  117647 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1010 18:49:20.451900  117647 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1010 18:49:20.451912  117647 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1010 18:49:20.451919  117647 command_runner.go:130] > # internal_wipe = true
	I1010 18:49:20.451926  117647 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1010 18:49:20.451934  117647 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1010 18:49:20.451939  117647 command_runner.go:130] > # internal_repair = false
	I1010 18:49:20.451946  117647 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1010 18:49:20.451952  117647 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1010 18:49:20.451958  117647 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1010 18:49:20.451964  117647 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1010 18:49:20.451976  117647 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1010 18:49:20.451982  117647 command_runner.go:130] > [crio.api]
	I1010 18:49:20.451988  117647 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1010 18:49:20.451992  117647 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1010 18:49:20.451998  117647 command_runner.go:130] > # IP address on which the stream server will listen.
	I1010 18:49:20.452004  117647 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1010 18:49:20.452010  117647 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1010 18:49:20.452015  117647 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1010 18:49:20.452018  117647 command_runner.go:130] > # stream_port = "0"
	I1010 18:49:20.452023  117647 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1010 18:49:20.452027  117647 command_runner.go:130] > # stream_enable_tls = false
	I1010 18:49:20.452032  117647 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1010 18:49:20.452036  117647 command_runner.go:130] > # stream_idle_timeout = ""
	I1010 18:49:20.452042  117647 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1010 18:49:20.452048  117647 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1010 18:49:20.452051  117647 command_runner.go:130] > # minutes.
	I1010 18:49:20.452057  117647 command_runner.go:130] > # stream_tls_cert = ""
	I1010 18:49:20.452062  117647 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1010 18:49:20.452068  117647 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1010 18:49:20.452072  117647 command_runner.go:130] > # stream_tls_key = ""
	I1010 18:49:20.452078  117647 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1010 18:49:20.452086  117647 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1010 18:49:20.452109  117647 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1010 18:49:20.452115  117647 command_runner.go:130] > # stream_tls_ca = ""
	I1010 18:49:20.452122  117647 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1010 18:49:20.452128  117647 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1010 18:49:20.452135  117647 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1010 18:49:20.452139  117647 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1010 18:49:20.452164  117647 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1010 18:49:20.452176  117647 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1010 18:49:20.452182  117647 command_runner.go:130] > [crio.runtime]
	I1010 18:49:20.452188  117647 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1010 18:49:20.452195  117647 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1010 18:49:20.452204  117647 command_runner.go:130] > # "nofile=1024:2048"
	I1010 18:49:20.452213  117647 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1010 18:49:20.452217  117647 command_runner.go:130] > # default_ulimits = [
	I1010 18:49:20.452220  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452227  117647 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1010 18:49:20.452231  117647 command_runner.go:130] > # no_pivot = false
	I1010 18:49:20.452237  117647 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1010 18:49:20.452245  117647 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1010 18:49:20.452254  117647 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1010 18:49:20.452261  117647 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1010 18:49:20.452267  117647 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1010 18:49:20.452275  117647 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1010 18:49:20.452280  117647 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1010 18:49:20.452284  117647 command_runner.go:130] > # Cgroup setting for conmon
	I1010 18:49:20.452291  117647 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1010 18:49:20.452297  117647 command_runner.go:130] > conmon_cgroup = "pod"
	I1010 18:49:20.452303  117647 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1010 18:49:20.452309  117647 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1010 18:49:20.452317  117647 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1010 18:49:20.452321  117647 command_runner.go:130] > conmon_env = [
	I1010 18:49:20.452328  117647 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1010 18:49:20.452332  117647 command_runner.go:130] > ]
	I1010 18:49:20.452337  117647 command_runner.go:130] > # Additional environment variables to set for all the
	I1010 18:49:20.452344  117647 command_runner.go:130] > # containers. These are overridden if set in the
	I1010 18:49:20.452349  117647 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1010 18:49:20.452352  117647 command_runner.go:130] > # default_env = [
	I1010 18:49:20.452356  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452361  117647 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1010 18:49:20.452370  117647 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1010 18:49:20.452375  117647 command_runner.go:130] > # selinux = false
	I1010 18:49:20.452383  117647 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1010 18:49:20.452389  117647 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1010 18:49:20.452394  117647 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1010 18:49:20.452404  117647 command_runner.go:130] > # seccomp_profile = ""
	I1010 18:49:20.452411  117647 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1010 18:49:20.452421  117647 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1010 18:49:20.452430  117647 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1010 18:49:20.452434  117647 command_runner.go:130] > # which might increase security.
	I1010 18:49:20.452439  117647 command_runner.go:130] > # This option is currently deprecated,
	I1010 18:49:20.452446  117647 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1010 18:49:20.452453  117647 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1010 18:49:20.452460  117647 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1010 18:49:20.452468  117647 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1010 18:49:20.452474  117647 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1010 18:49:20.452480  117647 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1010 18:49:20.452487  117647 command_runner.go:130] > # This option supports live configuration reload.
	I1010 18:49:20.452492  117647 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1010 18:49:20.452499  117647 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1010 18:49:20.452504  117647 command_runner.go:130] > # the cgroup blockio controller.
	I1010 18:49:20.452508  117647 command_runner.go:130] > # blockio_config_file = ""
	I1010 18:49:20.452514  117647 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1010 18:49:20.452520  117647 command_runner.go:130] > # blockio parameters.
	I1010 18:49:20.452526  117647 command_runner.go:130] > # blockio_reload = false
	I1010 18:49:20.452549  117647 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1010 18:49:20.452560  117647 command_runner.go:130] > # irqbalance daemon.
	I1010 18:49:20.452565  117647 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1010 18:49:20.452572  117647 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1010 18:49:20.452580  117647 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1010 18:49:20.452587  117647 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1010 18:49:20.452597  117647 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1010 18:49:20.452603  117647 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1010 18:49:20.452610  117647 command_runner.go:130] > # This option supports live configuration reload.
	I1010 18:49:20.452615  117647 command_runner.go:130] > # rdt_config_file = ""
	I1010 18:49:20.452622  117647 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1010 18:49:20.452627  117647 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1010 18:49:20.452654  117647 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1010 18:49:20.452663  117647 command_runner.go:130] > # separate_pull_cgroup = ""
	I1010 18:49:20.452669  117647 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1010 18:49:20.452675  117647 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1010 18:49:20.452679  117647 command_runner.go:130] > # will be added.
	I1010 18:49:20.452683  117647 command_runner.go:130] > # default_capabilities = [
	I1010 18:49:20.452689  117647 command_runner.go:130] > # 	"CHOWN",
	I1010 18:49:20.452693  117647 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1010 18:49:20.452698  117647 command_runner.go:130] > # 	"FSETID",
	I1010 18:49:20.452702  117647 command_runner.go:130] > # 	"FOWNER",
	I1010 18:49:20.452708  117647 command_runner.go:130] > # 	"SETGID",
	I1010 18:49:20.452711  117647 command_runner.go:130] > # 	"SETUID",
	I1010 18:49:20.452717  117647 command_runner.go:130] > # 	"SETPCAP",
	I1010 18:49:20.452721  117647 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1010 18:49:20.452726  117647 command_runner.go:130] > # 	"KILL",
	I1010 18:49:20.452729  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452737  117647 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1010 18:49:20.452745  117647 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1010 18:49:20.452750  117647 command_runner.go:130] > # add_inheritable_capabilities = false
	I1010 18:49:20.452758  117647 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1010 18:49:20.452764  117647 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1010 18:49:20.452768  117647 command_runner.go:130] > default_sysctls = [
	I1010 18:49:20.452773  117647 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1010 18:49:20.452777  117647 command_runner.go:130] > ]
	I1010 18:49:20.452782  117647 command_runner.go:130] > # List of devices on the host that a
	I1010 18:49:20.452788  117647 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1010 18:49:20.452792  117647 command_runner.go:130] > # allowed_devices = [
	I1010 18:49:20.452797  117647 command_runner.go:130] > # 	"/dev/fuse",
	I1010 18:49:20.452800  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452805  117647 command_runner.go:130] > # List of additional devices. specified as
	I1010 18:49:20.452815  117647 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1010 18:49:20.452819  117647 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1010 18:49:20.452827  117647 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1010 18:49:20.452831  117647 command_runner.go:130] > # additional_devices = [
	I1010 18:49:20.452835  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452840  117647 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1010 18:49:20.452846  117647 command_runner.go:130] > # cdi_spec_dirs = [
	I1010 18:49:20.452858  117647 command_runner.go:130] > # 	"/etc/cdi",
	I1010 18:49:20.452862  117647 command_runner.go:130] > # 	"/var/run/cdi",
	I1010 18:49:20.452866  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452872  117647 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1010 18:49:20.452879  117647 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1010 18:49:20.452883  117647 command_runner.go:130] > # Defaults to false.
	I1010 18:49:20.452887  117647 command_runner.go:130] > # device_ownership_from_security_context = false
	I1010 18:49:20.452894  117647 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1010 18:49:20.452902  117647 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1010 18:49:20.452906  117647 command_runner.go:130] > # hooks_dir = [
	I1010 18:49:20.452913  117647 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1010 18:49:20.452916  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452922  117647 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1010 18:49:20.452931  117647 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1010 18:49:20.452936  117647 command_runner.go:130] > # its default mounts from the following two files:
	I1010 18:49:20.452941  117647 command_runner.go:130] > #
	I1010 18:49:20.452946  117647 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1010 18:49:20.452954  117647 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1010 18:49:20.452960  117647 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1010 18:49:20.452965  117647 command_runner.go:130] > #
	I1010 18:49:20.452971  117647 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1010 18:49:20.452977  117647 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1010 18:49:20.452985  117647 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1010 18:49:20.452990  117647 command_runner.go:130] > #      only add mounts it finds in this file.
	I1010 18:49:20.452996  117647 command_runner.go:130] > #
	I1010 18:49:20.453000  117647 command_runner.go:130] > # default_mounts_file = ""
	I1010 18:49:20.453006  117647 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1010 18:49:20.453015  117647 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1010 18:49:20.453020  117647 command_runner.go:130] > pids_limit = 1024
	I1010 18:49:20.453026  117647 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1010 18:49:20.453034  117647 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1010 18:49:20.453040  117647 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1010 18:49:20.453048  117647 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1010 18:49:20.453054  117647 command_runner.go:130] > # log_size_max = -1
	I1010 18:49:20.453060  117647 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1010 18:49:20.453067  117647 command_runner.go:130] > # log_to_journald = false
	I1010 18:49:20.453074  117647 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1010 18:49:20.453081  117647 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1010 18:49:20.453085  117647 command_runner.go:130] > # Path to directory for container attach sockets.
	I1010 18:49:20.453090  117647 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1010 18:49:20.453097  117647 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1010 18:49:20.453101  117647 command_runner.go:130] > # bind_mount_prefix = ""
	I1010 18:49:20.453110  117647 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1010 18:49:20.453115  117647 command_runner.go:130] > # read_only = false
	I1010 18:49:20.453124  117647 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1010 18:49:20.453130  117647 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1010 18:49:20.453136  117647 command_runner.go:130] > # live configuration reload.
	I1010 18:49:20.453141  117647 command_runner.go:130] > # log_level = "info"
	I1010 18:49:20.453149  117647 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1010 18:49:20.453154  117647 command_runner.go:130] > # This option supports live configuration reload.
	I1010 18:49:20.453160  117647 command_runner.go:130] > # log_filter = ""
	I1010 18:49:20.453167  117647 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1010 18:49:20.453177  117647 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1010 18:49:20.453181  117647 command_runner.go:130] > # separated by comma.
	I1010 18:49:20.453188  117647 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1010 18:49:20.453194  117647 command_runner.go:130] > # uid_mappings = ""
	I1010 18:49:20.453200  117647 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1010 18:49:20.453208  117647 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1010 18:49:20.453213  117647 command_runner.go:130] > # separated by comma.
	I1010 18:49:20.453222  117647 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1010 18:49:20.453228  117647 command_runner.go:130] > # gid_mappings = ""
	I1010 18:49:20.453234  117647 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1010 18:49:20.453241  117647 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1010 18:49:20.453248  117647 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1010 18:49:20.453257  117647 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1010 18:49:20.453261  117647 command_runner.go:130] > # minimum_mappable_uid = -1
	I1010 18:49:20.453267  117647 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1010 18:49:20.453275  117647 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1010 18:49:20.453281  117647 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1010 18:49:20.453289  117647 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1010 18:49:20.453295  117647 command_runner.go:130] > # minimum_mappable_gid = -1
	I1010 18:49:20.453300  117647 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1010 18:49:20.453306  117647 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1010 18:49:20.453314  117647 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1010 18:49:20.453318  117647 command_runner.go:130] > # ctr_stop_timeout = 30
	I1010 18:49:20.453325  117647 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1010 18:49:20.453331  117647 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1010 18:49:20.453338  117647 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1010 18:49:20.453342  117647 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1010 18:49:20.453348  117647 command_runner.go:130] > drop_infra_ctr = false
	I1010 18:49:20.453353  117647 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1010 18:49:20.453361  117647 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1010 18:49:20.453368  117647 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1010 18:49:20.453374  117647 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1010 18:49:20.453380  117647 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1010 18:49:20.453388  117647 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1010 18:49:20.453393  117647 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1010 18:49:20.453400  117647 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1010 18:49:20.453404  117647 command_runner.go:130] > # shared_cpuset = ""
	I1010 18:49:20.453412  117647 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1010 18:49:20.453420  117647 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1010 18:49:20.453427  117647 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1010 18:49:20.453433  117647 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1010 18:49:20.453440  117647 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1010 18:49:20.453445  117647 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1010 18:49:20.453453  117647 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1010 18:49:20.453457  117647 command_runner.go:130] > # enable_criu_support = false
	I1010 18:49:20.453464  117647 command_runner.go:130] > # Enable/disable the generation of the container,
	I1010 18:49:20.453469  117647 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1010 18:49:20.453475  117647 command_runner.go:130] > # enable_pod_events = false
	I1010 18:49:20.453481  117647 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1010 18:49:20.453489  117647 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1010 18:49:20.453494  117647 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1010 18:49:20.453498  117647 command_runner.go:130] > # default_runtime = "runc"
	I1010 18:49:20.453503  117647 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1010 18:49:20.453510  117647 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1010 18:49:20.453520  117647 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1010 18:49:20.453525  117647 command_runner.go:130] > # creation as a file is not desired either.
	I1010 18:49:20.453533  117647 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1010 18:49:20.453539  117647 command_runner.go:130] > # the hostname is being managed dynamically.
	I1010 18:49:20.453543  117647 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1010 18:49:20.453549  117647 command_runner.go:130] > # ]
	I1010 18:49:20.453554  117647 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1010 18:49:20.453560  117647 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1010 18:49:20.453566  117647 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1010 18:49:20.453571  117647 command_runner.go:130] > # Each entry in the table should follow the format:
	I1010 18:49:20.453576  117647 command_runner.go:130] > #
	I1010 18:49:20.453581  117647 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1010 18:49:20.453585  117647 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1010 18:49:20.453607  117647 command_runner.go:130] > # runtime_type = "oci"
	I1010 18:49:20.453614  117647 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1010 18:49:20.453620  117647 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1010 18:49:20.453626  117647 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1010 18:49:20.453631  117647 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1010 18:49:20.453637  117647 command_runner.go:130] > # monitor_env = []
	I1010 18:49:20.453642  117647 command_runner.go:130] > # privileged_without_host_devices = false
	I1010 18:49:20.453646  117647 command_runner.go:130] > # allowed_annotations = []
	I1010 18:49:20.453651  117647 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1010 18:49:20.453658  117647 command_runner.go:130] > # Where:
	I1010 18:49:20.453664  117647 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1010 18:49:20.453671  117647 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1010 18:49:20.453677  117647 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1010 18:49:20.453685  117647 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1010 18:49:20.453690  117647 command_runner.go:130] > #   in $PATH.
	I1010 18:49:20.453698  117647 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1010 18:49:20.453702  117647 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1010 18:49:20.453708  117647 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1010 18:49:20.453714  117647 command_runner.go:130] > #   state.
	I1010 18:49:20.453720  117647 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1010 18:49:20.453729  117647 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1010 18:49:20.453735  117647 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1010 18:49:20.453742  117647 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1010 18:49:20.453749  117647 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1010 18:49:20.453757  117647 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1010 18:49:20.453762  117647 command_runner.go:130] > #   The currently recognized values are:
	I1010 18:49:20.453770  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1010 18:49:20.453777  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1010 18:49:20.453785  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1010 18:49:20.453791  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1010 18:49:20.453798  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1010 18:49:20.453806  117647 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1010 18:49:20.453814  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1010 18:49:20.453822  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1010 18:49:20.453828  117647 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1010 18:49:20.453836  117647 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1010 18:49:20.453840  117647 command_runner.go:130] > #   deprecated option "conmon".
	I1010 18:49:20.453849  117647 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1010 18:49:20.453854  117647 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1010 18:49:20.453862  117647 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1010 18:49:20.453867  117647 command_runner.go:130] > #   should be moved to the container's cgroup
	I1010 18:49:20.453875  117647 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1010 18:49:20.453880  117647 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1010 18:49:20.453889  117647 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1010 18:49:20.453896  117647 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1010 18:49:20.453899  117647 command_runner.go:130] > #
	I1010 18:49:20.453904  117647 command_runner.go:130] > # Using the seccomp notifier feature:
	I1010 18:49:20.453909  117647 command_runner.go:130] > #
	I1010 18:49:20.453916  117647 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1010 18:49:20.453925  117647 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1010 18:49:20.453928  117647 command_runner.go:130] > #
	I1010 18:49:20.453934  117647 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1010 18:49:20.453941  117647 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1010 18:49:20.453944  117647 command_runner.go:130] > #
	I1010 18:49:20.453950  117647 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1010 18:49:20.453956  117647 command_runner.go:130] > # feature.
	I1010 18:49:20.453959  117647 command_runner.go:130] > #
	I1010 18:49:20.453965  117647 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1010 18:49:20.453973  117647 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1010 18:49:20.453979  117647 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1010 18:49:20.453987  117647 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1010 18:49:20.453994  117647 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1010 18:49:20.453999  117647 command_runner.go:130] > #
	I1010 18:49:20.454005  117647 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1010 18:49:20.454012  117647 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1010 18:49:20.454033  117647 command_runner.go:130] > #
	I1010 18:49:20.454041  117647 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1010 18:49:20.454046  117647 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1010 18:49:20.454049  117647 command_runner.go:130] > #
	I1010 18:49:20.454055  117647 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1010 18:49:20.454062  117647 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1010 18:49:20.454066  117647 command_runner.go:130] > # limitation.
	I1010 18:49:20.454071  117647 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1010 18:49:20.454076  117647 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1010 18:49:20.454080  117647 command_runner.go:130] > runtime_type = "oci"
	I1010 18:49:20.454086  117647 command_runner.go:130] > runtime_root = "/run/runc"
	I1010 18:49:20.454090  117647 command_runner.go:130] > runtime_config_path = ""
	I1010 18:49:20.454097  117647 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1010 18:49:20.454102  117647 command_runner.go:130] > monitor_cgroup = "pod"
	I1010 18:49:20.454107  117647 command_runner.go:130] > monitor_exec_cgroup = ""
	I1010 18:49:20.454112  117647 command_runner.go:130] > monitor_env = [
	I1010 18:49:20.454117  117647 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1010 18:49:20.454123  117647 command_runner.go:130] > ]
	I1010 18:49:20.454128  117647 command_runner.go:130] > privileged_without_host_devices = false
	I1010 18:49:20.454134  117647 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1010 18:49:20.454141  117647 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1010 18:49:20.454147  117647 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1010 18:49:20.454156  117647 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1010 18:49:20.454163  117647 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1010 18:49:20.454174  117647 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1010 18:49:20.454183  117647 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1010 18:49:20.454192  117647 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1010 18:49:20.454198  117647 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1010 18:49:20.454205  117647 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1010 18:49:20.454212  117647 command_runner.go:130] > # Example:
	I1010 18:49:20.454216  117647 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1010 18:49:20.454221  117647 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1010 18:49:20.454226  117647 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1010 18:49:20.454231  117647 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1010 18:49:20.454238  117647 command_runner.go:130] > # cpuset = 0
	I1010 18:49:20.454243  117647 command_runner.go:130] > # cpushares = "0-1"
	I1010 18:49:20.454248  117647 command_runner.go:130] > # Where:
	I1010 18:49:20.454253  117647 command_runner.go:130] > # The workload name is workload-type.
	I1010 18:49:20.454259  117647 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1010 18:49:20.454267  117647 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1010 18:49:20.454272  117647 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1010 18:49:20.454282  117647 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1010 18:49:20.454289  117647 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1010 18:49:20.454294  117647 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1010 18:49:20.454303  117647 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1010 18:49:20.454308  117647 command_runner.go:130] > # Default value is set to true
	I1010 18:49:20.454312  117647 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1010 18:49:20.454319  117647 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1010 18:49:20.454324  117647 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1010 18:49:20.454329  117647 command_runner.go:130] > # Default value is set to 'false'
	I1010 18:49:20.454333  117647 command_runner.go:130] > # disable_hostport_mapping = false
	I1010 18:49:20.454341  117647 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1010 18:49:20.454345  117647 command_runner.go:130] > #
	I1010 18:49:20.454350  117647 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1010 18:49:20.454355  117647 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1010 18:49:20.454361  117647 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1010 18:49:20.454367  117647 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1010 18:49:20.454372  117647 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1010 18:49:20.454376  117647 command_runner.go:130] > [crio.image]
	I1010 18:49:20.454381  117647 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1010 18:49:20.454385  117647 command_runner.go:130] > # default_transport = "docker://"
	I1010 18:49:20.454391  117647 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1010 18:49:20.454397  117647 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1010 18:49:20.454401  117647 command_runner.go:130] > # global_auth_file = ""
	I1010 18:49:20.454405  117647 command_runner.go:130] > # The image used to instantiate infra containers.
	I1010 18:49:20.454410  117647 command_runner.go:130] > # This option supports live configuration reload.
	I1010 18:49:20.454419  117647 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1010 18:49:20.454427  117647 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1010 18:49:20.454432  117647 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1010 18:49:20.454439  117647 command_runner.go:130] > # This option supports live configuration reload.
	I1010 18:49:20.454443  117647 command_runner.go:130] > # pause_image_auth_file = ""
	I1010 18:49:20.454448  117647 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1010 18:49:20.454456  117647 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1010 18:49:20.454462  117647 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1010 18:49:20.454469  117647 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1010 18:49:20.454473  117647 command_runner.go:130] > # pause_command = "/pause"
	I1010 18:49:20.454481  117647 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1010 18:49:20.454489  117647 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1010 18:49:20.454496  117647 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1010 18:49:20.454504  117647 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1010 18:49:20.454511  117647 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1010 18:49:20.454517  117647 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1010 18:49:20.454522  117647 command_runner.go:130] > # pinned_images = [
	I1010 18:49:20.454524  117647 command_runner.go:130] > # ]
	I1010 18:49:20.454532  117647 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1010 18:49:20.454538  117647 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1010 18:49:20.454545  117647 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1010 18:49:20.454551  117647 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1010 18:49:20.454558  117647 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1010 18:49:20.454562  117647 command_runner.go:130] > # signature_policy = ""
	I1010 18:49:20.454570  117647 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1010 18:49:20.454576  117647 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1010 18:49:20.454584  117647 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1010 18:49:20.454591  117647 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1010 18:49:20.454599  117647 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1010 18:49:20.454604  117647 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1010 18:49:20.454612  117647 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1010 18:49:20.454618  117647 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1010 18:49:20.454621  117647 command_runner.go:130] > # changing them here.
	I1010 18:49:20.454625  117647 command_runner.go:130] > # insecure_registries = [
	I1010 18:49:20.454628  117647 command_runner.go:130] > # ]
	I1010 18:49:20.454634  117647 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1010 18:49:20.454641  117647 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1010 18:49:20.454646  117647 command_runner.go:130] > # image_volumes = "mkdir"
	I1010 18:49:20.454652  117647 command_runner.go:130] > # Temporary directory to use for storing big files
	I1010 18:49:20.454657  117647 command_runner.go:130] > # big_files_temporary_dir = ""
	I1010 18:49:20.454663  117647 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1010 18:49:20.454669  117647 command_runner.go:130] > # CNI plugins.
	I1010 18:49:20.454673  117647 command_runner.go:130] > [crio.network]
	I1010 18:49:20.454678  117647 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1010 18:49:20.454685  117647 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1010 18:49:20.454689  117647 command_runner.go:130] > # cni_default_network = ""
	I1010 18:49:20.454695  117647 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1010 18:49:20.454702  117647 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1010 18:49:20.454708  117647 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1010 18:49:20.454712  117647 command_runner.go:130] > # plugin_dirs = [
	I1010 18:49:20.454716  117647 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1010 18:49:20.454719  117647 command_runner.go:130] > # ]
	I1010 18:49:20.454725  117647 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1010 18:49:20.454729  117647 command_runner.go:130] > [crio.metrics]
	I1010 18:49:20.454734  117647 command_runner.go:130] > # Globally enable or disable metrics support.
	I1010 18:49:20.454738  117647 command_runner.go:130] > enable_metrics = true
	I1010 18:49:20.454742  117647 command_runner.go:130] > # Specify enabled metrics collectors.
	I1010 18:49:20.454747  117647 command_runner.go:130] > # Per default all metrics are enabled.
	I1010 18:49:20.454754  117647 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1010 18:49:20.454761  117647 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1010 18:49:20.454767  117647 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1010 18:49:20.454773  117647 command_runner.go:130] > # metrics_collectors = [
	I1010 18:49:20.454777  117647 command_runner.go:130] > # 	"operations",
	I1010 18:49:20.454781  117647 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1010 18:49:20.454786  117647 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1010 18:49:20.454791  117647 command_runner.go:130] > # 	"operations_errors",
	I1010 18:49:20.454795  117647 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1010 18:49:20.454801  117647 command_runner.go:130] > # 	"image_pulls_by_name",
	I1010 18:49:20.454805  117647 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1010 18:49:20.454810  117647 command_runner.go:130] > # 	"image_pulls_failures",
	I1010 18:49:20.454814  117647 command_runner.go:130] > # 	"image_pulls_successes",
	I1010 18:49:20.454821  117647 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1010 18:49:20.454839  117647 command_runner.go:130] > # 	"image_layer_reuse",
	I1010 18:49:20.454846  117647 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1010 18:49:20.454850  117647 command_runner.go:130] > # 	"containers_oom_total",
	I1010 18:49:20.454856  117647 command_runner.go:130] > # 	"containers_oom",
	I1010 18:49:20.454860  117647 command_runner.go:130] > # 	"processes_defunct",
	I1010 18:49:20.454865  117647 command_runner.go:130] > # 	"operations_total",
	I1010 18:49:20.454869  117647 command_runner.go:130] > # 	"operations_latency_seconds",
	I1010 18:49:20.454874  117647 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1010 18:49:20.454880  117647 command_runner.go:130] > # 	"operations_errors_total",
	I1010 18:49:20.454885  117647 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1010 18:49:20.454889  117647 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1010 18:49:20.454894  117647 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1010 18:49:20.454899  117647 command_runner.go:130] > # 	"image_pulls_success_total",
	I1010 18:49:20.454904  117647 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1010 18:49:20.454910  117647 command_runner.go:130] > # 	"containers_oom_count_total",
	I1010 18:49:20.454914  117647 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1010 18:49:20.454918  117647 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1010 18:49:20.454921  117647 command_runner.go:130] > # ]
	I1010 18:49:20.454928  117647 command_runner.go:130] > # The port on which the metrics server will listen.
	I1010 18:49:20.454932  117647 command_runner.go:130] > # metrics_port = 9090
	I1010 18:49:20.454936  117647 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1010 18:49:20.454940  117647 command_runner.go:130] > # metrics_socket = ""
	I1010 18:49:20.454945  117647 command_runner.go:130] > # The certificate for the secure metrics server.
	I1010 18:49:20.454951  117647 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1010 18:49:20.454959  117647 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1010 18:49:20.454963  117647 command_runner.go:130] > # certificate on any modification event.
	I1010 18:49:20.454970  117647 command_runner.go:130] > # metrics_cert = ""
	I1010 18:49:20.454975  117647 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1010 18:49:20.454982  117647 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1010 18:49:20.454986  117647 command_runner.go:130] > # metrics_key = ""
	I1010 18:49:20.454994  117647 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1010 18:49:20.454997  117647 command_runner.go:130] > [crio.tracing]
	I1010 18:49:20.455003  117647 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1010 18:49:20.455008  117647 command_runner.go:130] > # enable_tracing = false
	I1010 18:49:20.455014  117647 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1010 18:49:20.455020  117647 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1010 18:49:20.455027  117647 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1010 18:49:20.455033  117647 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1010 18:49:20.455037  117647 command_runner.go:130] > # CRI-O NRI configuration.
	I1010 18:49:20.455043  117647 command_runner.go:130] > [crio.nri]
	I1010 18:49:20.455047  117647 command_runner.go:130] > # Globally enable or disable NRI.
	I1010 18:49:20.455051  117647 command_runner.go:130] > # enable_nri = false
	I1010 18:49:20.455056  117647 command_runner.go:130] > # NRI socket to listen on.
	I1010 18:49:20.455062  117647 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1010 18:49:20.455066  117647 command_runner.go:130] > # NRI plugin directory to use.
	I1010 18:49:20.455071  117647 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1010 18:49:20.455078  117647 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1010 18:49:20.455082  117647 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1010 18:49:20.455089  117647 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1010 18:49:20.455093  117647 command_runner.go:130] > # nri_disable_connections = false
	I1010 18:49:20.455098  117647 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1010 18:49:20.455104  117647 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1010 18:49:20.455109  117647 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1010 18:49:20.455116  117647 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1010 18:49:20.455124  117647 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1010 18:49:20.455128  117647 command_runner.go:130] > [crio.stats]
	I1010 18:49:20.455134  117647 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1010 18:49:20.455140  117647 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1010 18:49:20.455144  117647 command_runner.go:130] > # stats_collection_period = 0
	I1010 18:49:20.455229  117647 cni.go:84] Creating CNI manager for ""
	I1010 18:49:20.455241  117647 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1010 18:49:20.455252  117647 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 18:49:20.455275  117647 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.28 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-965291 NodeName:multinode-965291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:49:20.455404  117647 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-965291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:49:20.455478  117647 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:49:20.466520  117647 command_runner.go:130] > kubeadm
	I1010 18:49:20.466545  117647 command_runner.go:130] > kubectl
	I1010 18:49:20.466550  117647 command_runner.go:130] > kubelet
	I1010 18:49:20.466575  117647 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:49:20.466623  117647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:49:20.477411  117647 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1010 18:49:20.496981  117647 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:49:20.516702  117647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1010 18:49:20.536051  117647 ssh_runner.go:195] Run: grep 192.168.39.28	control-plane.minikube.internal$ /etc/hosts
	I1010 18:49:20.540178  117647 command_runner.go:130] > 192.168.39.28	control-plane.minikube.internal
	I1010 18:49:20.540266  117647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:49:20.693531  117647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:49:20.713012  117647 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291 for IP: 192.168.39.28
	I1010 18:49:20.713039  117647 certs.go:194] generating shared ca certs ...
	I1010 18:49:20.713073  117647 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:49:20.713307  117647 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:49:20.713378  117647 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:49:20.713393  117647 certs.go:256] generating profile certs ...
	I1010 18:49:20.713519  117647 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/client.key
	I1010 18:49:20.713598  117647 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/apiserver.key.411c3c18
	I1010 18:49:20.713674  117647 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/proxy-client.key
	I1010 18:49:20.713691  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:49:20.713709  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:49:20.713724  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:49:20.713739  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:49:20.713753  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:49:20.713772  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:49:20.713790  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:49:20.713806  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:49:20.713872  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:49:20.713924  117647 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:49:20.713938  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:49:20.713970  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:49:20.713999  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:49:20.714027  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:49:20.714069  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:49:20.714097  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:49:20.714111  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:49:20.714121  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:49:20.714803  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:49:20.741207  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:49:20.766932  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:49:20.793077  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:49:20.818410  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:49:20.844146  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 18:49:20.869832  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:49:20.895246  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:49:20.920331  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:49:20.946436  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:49:20.972329  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:49:20.998636  117647 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:49:21.017522  117647 ssh_runner.go:195] Run: openssl version
	I1010 18:49:21.024079  117647 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1010 18:49:21.024292  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:49:21.036499  117647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:49:21.041996  117647 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:49:21.042042  117647 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:49:21.042090  117647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:49:21.048277  117647 command_runner.go:130] > b5213941
	I1010 18:49:21.048369  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:49:21.059155  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:49:21.070962  117647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:49:21.076482  117647 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:49:21.076681  117647 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:49:21.076745  117647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:49:21.082792  117647 command_runner.go:130] > 51391683
	I1010 18:49:21.082861  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 18:49:21.093358  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:49:21.106000  117647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:49:21.111155  117647 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:49:21.111431  117647 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:49:21.111501  117647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:49:21.117647  117647 command_runner.go:130] > 3ec20f2e
	I1010 18:49:21.117715  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:49:21.127779  117647 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:49:21.132589  117647 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:49:21.132614  117647 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1010 18:49:21.132622  117647 command_runner.go:130] > Device: 253,1	Inode: 6289960     Links: 1
	I1010 18:49:21.132632  117647 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1010 18:49:21.132642  117647 command_runner.go:130] > Access: 2024-10-10 18:42:36.625167643 +0000
	I1010 18:49:21.132650  117647 command_runner.go:130] > Modify: 2024-10-10 18:42:36.625167643 +0000
	I1010 18:49:21.132658  117647 command_runner.go:130] > Change: 2024-10-10 18:42:36.625167643 +0000
	I1010 18:49:21.132665  117647 command_runner.go:130] >  Birth: 2024-10-10 18:42:36.625167643 +0000
	I1010 18:49:21.132725  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:49:21.139041  117647 command_runner.go:130] > Certificate will not expire
	I1010 18:49:21.139211  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:49:21.145366  117647 command_runner.go:130] > Certificate will not expire
	I1010 18:49:21.145443  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:49:21.151621  117647 command_runner.go:130] > Certificate will not expire
	I1010 18:49:21.151712  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:49:21.157881  117647 command_runner.go:130] > Certificate will not expire
	I1010 18:49:21.157974  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:49:21.164169  117647 command_runner.go:130] > Certificate will not expire
	I1010 18:49:21.164239  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 18:49:21.170327  117647 command_runner.go:130] > Certificate will not expire
	I1010 18:49:21.170400  117647 kubeadm.go:392] StartCluster: {Name:multinode-965291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-965291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:49:21.170509  117647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:49:21.170557  117647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:49:21.212963  117647 command_runner.go:130] > b3ca0ce060637bf39d23b9c1e7488f637044d041ee138579ff7f478d0d60b669
	I1010 18:49:21.212984  117647 command_runner.go:130] > 76513cc8bb6d03925307e236110027724b85565078d4d922c8311ed6083a01a6
	I1010 18:49:21.212996  117647 command_runner.go:130] > 97c9f528dac21bf44e6c02e64554a81132b8abf9b31ec579102ba9a042b3d38a
	I1010 18:49:21.213005  117647 command_runner.go:130] > f092e089c21847e0ff48a8248e7faeafada4c7ed83f9e2d19aac4160fde8cf56
	I1010 18:49:21.213010  117647 command_runner.go:130] > c65a6383e328fc83b5178b1b9052992dfd78946436001f2eb8b63fec22e3fa1f
	I1010 18:49:21.213015  117647 command_runner.go:130] > fe6a7f6a2a2853173494e508a5539fa554ffb163cfb97097eb7c185a48e87da8
	I1010 18:49:21.213021  117647 command_runner.go:130] > 5794c9a1761178b22d3156765ad9ecd2f40f38e87266266ce24367188f9b5018
	I1010 18:49:21.213027  117647 command_runner.go:130] > f00129d23471b49f194e49f9941ac40c8694efa5885506aafe4a6628465e47f1
	I1010 18:49:21.213045  117647 cri.go:89] found id: "b3ca0ce060637bf39d23b9c1e7488f637044d041ee138579ff7f478d0d60b669"
	I1010 18:49:21.213054  117647 cri.go:89] found id: "76513cc8bb6d03925307e236110027724b85565078d4d922c8311ed6083a01a6"
	I1010 18:49:21.213057  117647 cri.go:89] found id: "97c9f528dac21bf44e6c02e64554a81132b8abf9b31ec579102ba9a042b3d38a"
	I1010 18:49:21.213060  117647 cri.go:89] found id: "f092e089c21847e0ff48a8248e7faeafada4c7ed83f9e2d19aac4160fde8cf56"
	I1010 18:49:21.213063  117647 cri.go:89] found id: "c65a6383e328fc83b5178b1b9052992dfd78946436001f2eb8b63fec22e3fa1f"
	I1010 18:49:21.213067  117647 cri.go:89] found id: "fe6a7f6a2a2853173494e508a5539fa554ffb163cfb97097eb7c185a48e87da8"
	I1010 18:49:21.213073  117647 cri.go:89] found id: "5794c9a1761178b22d3156765ad9ecd2f40f38e87266266ce24367188f9b5018"
	I1010 18:49:21.213075  117647 cri.go:89] found id: "f00129d23471b49f194e49f9941ac40c8694efa5885506aafe4a6628465e47f1"
	I1010 18:49:21.213078  117647 cri.go:89] found id: ""
	I1010 18:49:21.213118  117647 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
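The dump above ends with the restart path probing for existing kube-system containers via "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" and collecting the returned IDs. The following is a minimal stand-alone sketch of that probe, not minikube's actual code; the crictl flags and label are the ones shown in the log, everything else (error handling, printing) is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as in the log: list all containers (any state) whose pod
	// namespace label is kube-system, printing only the container IDs.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Printf("crictl failed: %v\n", err)
		return
	}
	ids := strings.Fields(strings.TrimSpace(string(out)))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}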
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-965291 -n multinode-965291
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-965291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.29s)
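Earlier in the same dump, minikube validates each control-plane certificate with "openssl x509 -noout -checkend 86400" and logs "Certificate will not expire" when the check passes. A minimal sketch of that expiry probe is below; the certificate paths are taken from the log, and the rest is illustrative rather than minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// -checkend 86400 exits non-zero if the certificate expires within 24h
		// (or cannot be read), which is exactly what the log lines above test.
		cmd := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400")
		if err := cmd.Run(); err != nil {
			fmt.Printf("%s: expires within 24h or unreadable: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: certificate will not expire\n", c)
	}
}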

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 stop
E1010 18:52:50.018200   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-965291 stop: exit status 82 (2m0.498173154s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-965291-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
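The failure above comes from the harness running "minikube stop" and getting a non-zero exit status after the two-minute stop timeout. A hedged sketch of that call pattern follows; the binary path and profile name are taken from the log, and treating exit status 82 as the GUEST_STOP_TIMEOUT case is shown only because that is what this particular run reported, not a documented contract.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-965291", "stop")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: in the run above this was 82 (GUEST_STOP_TIMEOUT).
		if exitErr.ExitCode() == 82 {
			fmt.Println("stop timed out; VM is still running")
		} else {
			fmt.Printf("stop failed with exit status %d\n", exitErr.ExitCode())
		}
		return
	}
	if err != nil {
		fmt.Printf("failed to run minikube stop: %v\n", err)
		return
	}
	fmt.Println("stop succeeded")
}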
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-965291 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-965291 status: (18.693107977s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-965291 status --alsologtostderr: (3.390686456s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-965291 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-965291 status --alsologtostderr": 
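The two assertions above count how many hosts and kubelets report "Stopped" in the "minikube status" output. The sketch below shows one way such a count could be taken; it is illustrative only, the binary path and profile name come from the log, and the assumption that the templated status output contains one "Stopped" token per stopped node is mine, not the test's.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `minikube status` exits non-zero when any component is down, so the error
	// is deliberately ignored here and only the output is inspected.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "multinode-965291",
		"status", "--format", "{{.Host}}").CombinedOutput()

	stopped := strings.Count(string(out), "Stopped")
	fmt.Printf("hosts reporting Stopped: %d\n", stopped)
}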
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-965291 -n multinode-965291
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-965291 logs -n 25: (2.056424558s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:44 UTC | 10 Oct 24 18:44 UTC |
	|         | multinode-965291-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp multinode-965291-m02:/home/docker/cp-test.txt                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:44 UTC | 10 Oct 24 18:44 UTC |
	|         | multinode-965291:/home/docker/cp-test_multinode-965291-m02_multinode-965291.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:44 UTC | 10 Oct 24 18:44 UTC |
	|         | multinode-965291-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n multinode-965291 sudo cat                                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | /home/docker/cp-test_multinode-965291-m02_multinode-965291.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp multinode-965291-m02:/home/docker/cp-test.txt                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03:/home/docker/cp-test_multinode-965291-m02_multinode-965291-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n multinode-965291-m03 sudo cat                                   | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | /home/docker/cp-test_multinode-965291-m02_multinode-965291-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp testdata/cp-test.txt                                                | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp multinode-965291-m03:/home/docker/cp-test.txt                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3017167106/001/cp-test_multinode-965291-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp multinode-965291-m03:/home/docker/cp-test.txt                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291:/home/docker/cp-test_multinode-965291-m03_multinode-965291.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n multinode-965291 sudo cat                                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | /home/docker/cp-test_multinode-965291-m03_multinode-965291.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-965291 cp multinode-965291-m03:/home/docker/cp-test.txt                       | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m02:/home/docker/cp-test_multinode-965291-m03_multinode-965291-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n                                                                 | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | multinode-965291-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-965291 ssh -n multinode-965291-m02 sudo cat                                   | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | /home/docker/cp-test_multinode-965291-m03_multinode-965291-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-965291 node stop m03                                                          | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	| node    | multinode-965291 node start                                                             | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC | 10 Oct 24 18:45 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-965291                                                                | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC |                     |
	| stop    | -p multinode-965291                                                                     | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:45 UTC |                     |
	| start   | -p multinode-965291                                                                     | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:47 UTC | 10 Oct 24 18:51 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-965291                                                                | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:51 UTC |                     |
	| node    | multinode-965291 node delete                                                            | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:51 UTC | 10 Oct 24 18:51 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-965291 stop                                                                   | multinode-965291 | jenkins | v1.34.0 | 10 Oct 24 18:51 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 18:47:46
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:47:46.293173  117647 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:47:46.293450  117647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:47:46.293461  117647 out.go:358] Setting ErrFile to fd 2...
	I1010 18:47:46.293466  117647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:47:46.293697  117647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:47:46.294262  117647 out.go:352] Setting JSON to false
	I1010 18:47:46.295211  117647 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9012,"bootTime":1728577054,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:47:46.295285  117647 start.go:139] virtualization: kvm guest
	I1010 18:47:46.297725  117647 out.go:177] * [multinode-965291] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 18:47:46.299292  117647 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 18:47:46.299310  117647 notify.go:220] Checking for updates...
	I1010 18:47:46.302434  117647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:47:46.303975  117647 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:47:46.305927  117647 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:47:46.307844  117647 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:47:46.309708  117647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:47:46.311776  117647 config.go:182] Loaded profile config "multinode-965291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:47:46.311933  117647 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 18:47:46.312485  117647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:47:46.312538  117647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:47:46.328741  117647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I1010 18:47:46.329358  117647 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:47:46.329937  117647 main.go:141] libmachine: Using API Version  1
	I1010 18:47:46.329961  117647 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:47:46.330317  117647 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:47:46.330495  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:47:46.368088  117647 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 18:47:46.369609  117647 start.go:297] selected driver: kvm2
	I1010 18:47:46.369627  117647 start.go:901] validating driver "kvm2" against &{Name:multinode-965291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-965291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:47:46.369761  117647 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:47:46.370086  117647 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:47:46.370161  117647 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 18:47:46.385840  117647 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 18:47:46.386585  117647 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:47:46.386641  117647 cni.go:84] Creating CNI manager for ""
	I1010 18:47:46.386693  117647 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1010 18:47:46.386766  117647 start.go:340] cluster config:
	{Name:multinode-965291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-965291 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:47:46.386893  117647 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:47:46.388896  117647 out.go:177] * Starting "multinode-965291" primary control-plane node in "multinode-965291" cluster
	I1010 18:47:46.390479  117647 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:47:46.390539  117647 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:47:46.390553  117647 cache.go:56] Caching tarball of preloaded images
	I1010 18:47:46.390695  117647 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:47:46.390712  117647 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 18:47:46.390855  117647 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/config.json ...
	I1010 18:47:46.391100  117647 start.go:360] acquireMachinesLock for multinode-965291: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 18:47:46.391165  117647 start.go:364] duration metric: took 39.072µs to acquireMachinesLock for "multinode-965291"
	I1010 18:47:46.391188  117647 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:47:46.391197  117647 fix.go:54] fixHost starting: 
	I1010 18:47:46.391481  117647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:47:46.391523  117647 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:47:46.406401  117647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I1010 18:47:46.406914  117647 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:47:46.407442  117647 main.go:141] libmachine: Using API Version  1
	I1010 18:47:46.407459  117647 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:47:46.407824  117647 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:47:46.408046  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:47:46.408232  117647 main.go:141] libmachine: (multinode-965291) Calling .GetState
	I1010 18:47:46.410112  117647 fix.go:112] recreateIfNeeded on multinode-965291: state=Running err=<nil>
	W1010 18:47:46.410132  117647 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 18:47:46.413073  117647 out.go:177] * Updating the running kvm2 "multinode-965291" VM ...
	I1010 18:47:46.414493  117647 machine.go:93] provisionDockerMachine start ...
	I1010 18:47:46.414521  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:47:46.414758  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:46.417653  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.418096  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:46.418134  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.418359  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:47:46.418567  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.418759  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.418900  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:47:46.419059  117647 main.go:141] libmachine: Using SSH client type: native
	I1010 18:47:46.419261  117647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1010 18:47:46.419273  117647 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:47:46.530794  117647 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-965291
	
	I1010 18:47:46.530827  117647 main.go:141] libmachine: (multinode-965291) Calling .GetMachineName
	I1010 18:47:46.531126  117647 buildroot.go:166] provisioning hostname "multinode-965291"
	I1010 18:47:46.531160  117647 main.go:141] libmachine: (multinode-965291) Calling .GetMachineName
	I1010 18:47:46.531332  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:46.534393  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.534897  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:46.534940  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.535114  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:47:46.535307  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.535472  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.535667  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:47:46.535913  117647 main.go:141] libmachine: Using SSH client type: native
	I1010 18:47:46.536145  117647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1010 18:47:46.536163  117647 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-965291 && echo "multinode-965291" | sudo tee /etc/hostname
	I1010 18:47:46.673579  117647 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-965291
	
	I1010 18:47:46.673614  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:46.676609  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.677067  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:46.677091  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.677420  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:47:46.677611  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.677777  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:46.677916  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:47:46.678063  117647 main.go:141] libmachine: Using SSH client type: native
	I1010 18:47:46.678273  117647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1010 18:47:46.678295  117647 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-965291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-965291/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-965291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:47:46.790570  117647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:47:46.790605  117647 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 18:47:46.790624  117647 buildroot.go:174] setting up certificates
	I1010 18:47:46.790634  117647 provision.go:84] configureAuth start
	I1010 18:47:46.790644  117647 main.go:141] libmachine: (multinode-965291) Calling .GetMachineName
	I1010 18:47:46.790907  117647 main.go:141] libmachine: (multinode-965291) Calling .GetIP
	I1010 18:47:46.793597  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.794024  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:46.794058  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.794211  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:46.796868  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.797285  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:46.797328  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:46.797513  117647 provision.go:143] copyHostCerts
	I1010 18:47:46.797545  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:47:46.797575  117647 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 18:47:46.797582  117647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 18:47:46.797649  117647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 18:47:46.797740  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:47:46.797757  117647 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 18:47:46.797763  117647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 18:47:46.797787  117647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 18:47:46.797846  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:47:46.797862  117647 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 18:47:46.797873  117647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 18:47:46.797902  117647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 18:47:46.797961  117647 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.multinode-965291 san=[127.0.0.1 192.168.39.28 localhost minikube multinode-965291]
	I1010 18:47:47.373209  117647 provision.go:177] copyRemoteCerts
	I1010 18:47:47.373279  117647 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:47:47.373307  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:47.375881  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:47.376229  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:47.376265  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:47.376479  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:47:47.376697  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:47.376865  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:47:47.376996  117647 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/multinode-965291/id_rsa Username:docker}
	I1010 18:47:47.460334  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1010 18:47:47.460413  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 18:47:47.486423  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1010 18:47:47.486536  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1010 18:47:47.512795  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1010 18:47:47.512885  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:47:47.544811  117647 provision.go:87] duration metric: took 754.154389ms to configureAuth
	I1010 18:47:47.544844  117647 buildroot.go:189] setting minikube options for container-runtime
	I1010 18:47:47.545078  117647 config.go:182] Loaded profile config "multinode-965291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:47:47.545151  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:47:47.547966  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:47.548384  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:47:47.548419  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:47:47.548563  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:47:47.548769  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:47.548932  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:47:47.549055  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:47:47.549225  117647 main.go:141] libmachine: Using SSH client type: native
	I1010 18:47:47.549400  117647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1010 18:47:47.549417  117647 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:49:18.287285  117647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:49:18.287331  117647 machine.go:96] duration metric: took 1m31.872808891s to provisionDockerMachine
	I1010 18:49:18.287350  117647 start.go:293] postStartSetup for "multinode-965291" (driver="kvm2")
	I1010 18:49:18.287365  117647 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:49:18.287398  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:49:18.287781  117647 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:49:18.287810  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:49:18.291535  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.292093  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:18.292126  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.292285  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:49:18.292492  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:49:18.292675  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:49:18.292879  117647 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/multinode-965291/id_rsa Username:docker}
	I1010 18:49:18.381162  117647 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:49:18.386114  117647 command_runner.go:130] > NAME=Buildroot
	I1010 18:49:18.386141  117647 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1010 18:49:18.386148  117647 command_runner.go:130] > ID=buildroot
	I1010 18:49:18.386156  117647 command_runner.go:130] > VERSION_ID=2023.02.9
	I1010 18:49:18.386164  117647 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1010 18:49:18.386199  117647 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 18:49:18.386215  117647 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 18:49:18.386281  117647 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 18:49:18.386358  117647 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 18:49:18.386370  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /etc/ssl/certs/888762.pem
	I1010 18:49:18.386464  117647 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:49:18.396778  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:49:18.423812  117647 start.go:296] duration metric: took 136.444288ms for postStartSetup
	I1010 18:49:18.423860  117647 fix.go:56] duration metric: took 1m32.032662324s for fixHost
	I1010 18:49:18.423885  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:49:18.426994  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.427505  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:18.427542  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.427755  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:49:18.427917  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:49:18.428157  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:49:18.428362  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:49:18.428588  117647 main.go:141] libmachine: Using SSH client type: native
	I1010 18:49:18.428747  117647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1010 18:49:18.428758  117647 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 18:49:18.538384  117647 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728586158.518055076
	
	I1010 18:49:18.538414  117647 fix.go:216] guest clock: 1728586158.518055076
	I1010 18:49:18.538423  117647 fix.go:229] Guest: 2024-10-10 18:49:18.518055076 +0000 UTC Remote: 2024-10-10 18:49:18.42386518 +0000 UTC m=+92.171279660 (delta=94.189896ms)
	I1010 18:49:18.538485  117647 fix.go:200] guest clock delta is within tolerance: 94.189896ms
	I1010 18:49:18.538496  117647 start.go:83] releasing machines lock for "multinode-965291", held for 1m32.147315907s
	I1010 18:49:18.538538  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:49:18.538820  117647 main.go:141] libmachine: (multinode-965291) Calling .GetIP
	I1010 18:49:18.541786  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.542290  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:18.542325  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.542496  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:49:18.543133  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:49:18.543380  117647 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:49:18.543514  117647 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:49:18.543576  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:49:18.543594  117647 ssh_runner.go:195] Run: cat /version.json
	I1010 18:49:18.543620  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:49:18.546722  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.547054  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.547114  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:18.547150  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.547306  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:49:18.547477  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:49:18.547539  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:18.547570  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:18.547637  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:49:18.547715  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:49:18.547799  117647 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/multinode-965291/id_rsa Username:docker}
	I1010 18:49:18.547858  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:49:18.547996  117647 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:49:18.548149  117647 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/multinode-965291/id_rsa Username:docker}
	I1010 18:49:18.626343  117647 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1010 18:49:18.626536  117647 ssh_runner.go:195] Run: systemctl --version
	I1010 18:49:18.657814  117647 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1010 18:49:18.657898  117647 command_runner.go:130] > systemd 252 (252)
	I1010 18:49:18.657944  117647 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1010 18:49:18.658013  117647 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:49:18.818506  117647 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1010 18:49:18.826335  117647 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1010 18:49:18.826624  117647 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:49:18.826691  117647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:49:18.836611  117647 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:49:18.836642  117647 start.go:495] detecting cgroup driver to use...
	I1010 18:49:18.836712  117647 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:49:18.854116  117647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:49:18.868936  117647 docker.go:217] disabling cri-docker service (if available) ...
	I1010 18:49:18.869002  117647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:49:18.883684  117647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:49:18.898280  117647 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:49:19.045206  117647 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:49:19.211098  117647 docker.go:233] disabling docker service ...
	I1010 18:49:19.211165  117647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:49:19.231977  117647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:49:19.246964  117647 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:49:19.401708  117647 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:49:19.559847  117647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:49:19.576284  117647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:49:19.595711  117647 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1010 18:49:19.595991  117647 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 18:49:19.596152  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.608281  117647 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 18:49:19.608353  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.620270  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.633140  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.644308  117647 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:49:19.656507  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.668077  117647 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.691194  117647 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:49:19.706550  117647 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:49:19.718575  117647 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1010 18:49:19.719145  117647 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:49:19.759850  117647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:49:19.946179  117647 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:49:20.191891  117647 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:49:20.191978  117647 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:49:20.197305  117647 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1010 18:49:20.197334  117647 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1010 18:49:20.197344  117647 command_runner.go:130] > Device: 0,22	Inode: 1350        Links: 1
	I1010 18:49:20.197352  117647 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1010 18:49:20.197359  117647 command_runner.go:130] > Access: 2024-10-10 18:49:20.025969704 +0000
	I1010 18:49:20.197367  117647 command_runner.go:130] > Modify: 2024-10-10 18:49:20.025969704 +0000
	I1010 18:49:20.197374  117647 command_runner.go:130] > Change: 2024-10-10 18:49:20.025969704 +0000
	I1010 18:49:20.197379  117647 command_runner.go:130] >  Birth: -
	I1010 18:49:20.197398  117647 start.go:563] Will wait 60s for crictl version
	I1010 18:49:20.197528  117647 ssh_runner.go:195] Run: which crictl
	I1010 18:49:20.201896  117647 command_runner.go:130] > /usr/bin/crictl
	I1010 18:49:20.202057  117647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 18:49:20.244488  117647 command_runner.go:130] > Version:  0.1.0
	I1010 18:49:20.244516  117647 command_runner.go:130] > RuntimeName:  cri-o
	I1010 18:49:20.244521  117647 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1010 18:49:20.244526  117647 command_runner.go:130] > RuntimeApiVersion:  v1
	I1010 18:49:20.244541  117647 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 18:49:20.244608  117647 ssh_runner.go:195] Run: crio --version
	I1010 18:49:20.273755  117647 command_runner.go:130] > crio version 1.29.1
	I1010 18:49:20.273787  117647 command_runner.go:130] > Version:        1.29.1
	I1010 18:49:20.273796  117647 command_runner.go:130] > GitCommit:      unknown
	I1010 18:49:20.273803  117647 command_runner.go:130] > GitCommitDate:  unknown
	I1010 18:49:20.273810  117647 command_runner.go:130] > GitTreeState:   clean
	I1010 18:49:20.273819  117647 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1010 18:49:20.273825  117647 command_runner.go:130] > GoVersion:      go1.21.6
	I1010 18:49:20.273831  117647 command_runner.go:130] > Compiler:       gc
	I1010 18:49:20.273839  117647 command_runner.go:130] > Platform:       linux/amd64
	I1010 18:49:20.273846  117647 command_runner.go:130] > Linkmode:       dynamic
	I1010 18:49:20.273853  117647 command_runner.go:130] > BuildTags:      
	I1010 18:49:20.273860  117647 command_runner.go:130] >   containers_image_ostree_stub
	I1010 18:49:20.273870  117647 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1010 18:49:20.273882  117647 command_runner.go:130] >   btrfs_noversion
	I1010 18:49:20.273891  117647 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1010 18:49:20.273898  117647 command_runner.go:130] >   libdm_no_deferred_remove
	I1010 18:49:20.273908  117647 command_runner.go:130] >   seccomp
	I1010 18:49:20.273916  117647 command_runner.go:130] > LDFlags:          unknown
	I1010 18:49:20.273924  117647 command_runner.go:130] > SeccompEnabled:   true
	I1010 18:49:20.273930  117647 command_runner.go:130] > AppArmorEnabled:  false
	I1010 18:49:20.275048  117647 ssh_runner.go:195] Run: crio --version
	I1010 18:49:20.303968  117647 command_runner.go:130] > crio version 1.29.1
	I1010 18:49:20.303999  117647 command_runner.go:130] > Version:        1.29.1
	I1010 18:49:20.304006  117647 command_runner.go:130] > GitCommit:      unknown
	I1010 18:49:20.304010  117647 command_runner.go:130] > GitCommitDate:  unknown
	I1010 18:49:20.304014  117647 command_runner.go:130] > GitTreeState:   clean
	I1010 18:49:20.304023  117647 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1010 18:49:20.304029  117647 command_runner.go:130] > GoVersion:      go1.21.6
	I1010 18:49:20.304035  117647 command_runner.go:130] > Compiler:       gc
	I1010 18:49:20.304042  117647 command_runner.go:130] > Platform:       linux/amd64
	I1010 18:49:20.304048  117647 command_runner.go:130] > Linkmode:       dynamic
	I1010 18:49:20.304061  117647 command_runner.go:130] > BuildTags:      
	I1010 18:49:20.304067  117647 command_runner.go:130] >   containers_image_ostree_stub
	I1010 18:49:20.304072  117647 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1010 18:49:20.304089  117647 command_runner.go:130] >   btrfs_noversion
	I1010 18:49:20.304097  117647 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1010 18:49:20.304104  117647 command_runner.go:130] >   libdm_no_deferred_remove
	I1010 18:49:20.304109  117647 command_runner.go:130] >   seccomp
	I1010 18:49:20.304115  117647 command_runner.go:130] > LDFlags:          unknown
	I1010 18:49:20.304124  117647 command_runner.go:130] > SeccompEnabled:   true
	I1010 18:49:20.304130  117647 command_runner.go:130] > AppArmorEnabled:  false
	I1010 18:49:20.311483  117647 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 18:49:20.313396  117647 main.go:141] libmachine: (multinode-965291) Calling .GetIP
	I1010 18:49:20.316646  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:20.317139  117647 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:49:20.317169  117647 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:49:20.317403  117647 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 18:49:20.322205  117647 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1010 18:49:20.322329  117647 kubeadm.go:883] updating cluster {Name:multinode-965291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-965291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:49:20.322475  117647 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 18:49:20.322514  117647 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:49:20.364746  117647 command_runner.go:130] > {
	I1010 18:49:20.364770  117647 command_runner.go:130] >   "images": [
	I1010 18:49:20.364774  117647 command_runner.go:130] >     {
	I1010 18:49:20.364783  117647 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1010 18:49:20.364788  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.364798  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1010 18:49:20.364802  117647 command_runner.go:130] >       ],
	I1010 18:49:20.364806  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.364814  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1010 18:49:20.364821  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1010 18:49:20.364824  117647 command_runner.go:130] >       ],
	I1010 18:49:20.364829  117647 command_runner.go:130] >       "size": "87190579",
	I1010 18:49:20.364833  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.364837  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.364843  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.364861  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.364865  117647 command_runner.go:130] >     },
	I1010 18:49:20.364872  117647 command_runner.go:130] >     {
	I1010 18:49:20.364880  117647 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1010 18:49:20.364886  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.364894  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1010 18:49:20.364898  117647 command_runner.go:130] >       ],
	I1010 18:49:20.364903  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.364909  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1010 18:49:20.364918  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1010 18:49:20.364922  117647 command_runner.go:130] >       ],
	I1010 18:49:20.364926  117647 command_runner.go:130] >       "size": "1363676",
	I1010 18:49:20.364930  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.364935  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.364941  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.364945  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.364948  117647 command_runner.go:130] >     },
	I1010 18:49:20.364952  117647 command_runner.go:130] >     {
	I1010 18:49:20.364958  117647 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1010 18:49:20.364963  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.364967  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1010 18:49:20.364970  117647 command_runner.go:130] >       ],
	I1010 18:49:20.364975  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.364985  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1010 18:49:20.364992  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1010 18:49:20.364998  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365002  117647 command_runner.go:130] >       "size": "31470524",
	I1010 18:49:20.365005  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.365012  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365015  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365019  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365023  117647 command_runner.go:130] >     },
	I1010 18:49:20.365026  117647 command_runner.go:130] >     {
	I1010 18:49:20.365031  117647 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1010 18:49:20.365036  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365041  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1010 18:49:20.365045  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365048  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365054  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1010 18:49:20.365067  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1010 18:49:20.365073  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365076  117647 command_runner.go:130] >       "size": "63273227",
	I1010 18:49:20.365080  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.365087  117647 command_runner.go:130] >       "username": "nonroot",
	I1010 18:49:20.365091  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365096  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365099  117647 command_runner.go:130] >     },
	I1010 18:49:20.365104  117647 command_runner.go:130] >     {
	I1010 18:49:20.365110  117647 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1010 18:49:20.365116  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365120  117647 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1010 18:49:20.365126  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365130  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365137  117647 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1010 18:49:20.365145  117647 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1010 18:49:20.365149  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365161  117647 command_runner.go:130] >       "size": "149009664",
	I1010 18:49:20.365166  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.365170  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.365174  117647 command_runner.go:130] >       },
	I1010 18:49:20.365178  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365182  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365186  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365190  117647 command_runner.go:130] >     },
	I1010 18:49:20.365195  117647 command_runner.go:130] >     {
	I1010 18:49:20.365205  117647 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1010 18:49:20.365212  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365217  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1010 18:49:20.365222  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365226  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365234  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1010 18:49:20.365242  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1010 18:49:20.365246  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365252  117647 command_runner.go:130] >       "size": "95237600",
	I1010 18:49:20.365256  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.365262  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.365266  117647 command_runner.go:130] >       },
	I1010 18:49:20.365272  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365276  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365280  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365284  117647 command_runner.go:130] >     },
	I1010 18:49:20.365287  117647 command_runner.go:130] >     {
	I1010 18:49:20.365293  117647 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1010 18:49:20.365299  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365304  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1010 18:49:20.365308  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365312  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365321  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1010 18:49:20.365328  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1010 18:49:20.365334  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365338  117647 command_runner.go:130] >       "size": "89437508",
	I1010 18:49:20.365342  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.365346  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.365352  117647 command_runner.go:130] >       },
	I1010 18:49:20.365355  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365359  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365363  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365366  117647 command_runner.go:130] >     },
	I1010 18:49:20.365370  117647 command_runner.go:130] >     {
	I1010 18:49:20.365376  117647 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1010 18:49:20.365382  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365387  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1010 18:49:20.365391  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365395  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365420  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1010 18:49:20.365429  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1010 18:49:20.365432  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365448  117647 command_runner.go:130] >       "size": "92733849",
	I1010 18:49:20.365454  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.365458  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365462  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365466  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365470  117647 command_runner.go:130] >     },
	I1010 18:49:20.365473  117647 command_runner.go:130] >     {
	I1010 18:49:20.365478  117647 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1010 18:49:20.365482  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365487  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1010 18:49:20.365490  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365494  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365500  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1010 18:49:20.365507  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1010 18:49:20.365511  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365515  117647 command_runner.go:130] >       "size": "68420934",
	I1010 18:49:20.365519  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.365523  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.365526  117647 command_runner.go:130] >       },
	I1010 18:49:20.365530  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365536  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365542  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.365545  117647 command_runner.go:130] >     },
	I1010 18:49:20.365549  117647 command_runner.go:130] >     {
	I1010 18:49:20.365554  117647 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1010 18:49:20.365561  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.365565  117647 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1010 18:49:20.365569  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365573  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.365580  117647 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1010 18:49:20.365589  117647 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1010 18:49:20.365593  117647 command_runner.go:130] >       ],
	I1010 18:49:20.365597  117647 command_runner.go:130] >       "size": "742080",
	I1010 18:49:20.365601  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.365605  117647 command_runner.go:130] >         "value": "65535"
	I1010 18:49:20.365608  117647 command_runner.go:130] >       },
	I1010 18:49:20.365612  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.365616  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.365619  117647 command_runner.go:130] >       "pinned": true
	I1010 18:49:20.365623  117647 command_runner.go:130] >     }
	I1010 18:49:20.365626  117647 command_runner.go:130] >   ]
	I1010 18:49:20.365629  117647 command_runner.go:130] > }
	I1010 18:49:20.366333  117647 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:49:20.366357  117647 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:49:20.366407  117647 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:49:20.402003  117647 command_runner.go:130] > {
	I1010 18:49:20.402027  117647 command_runner.go:130] >   "images": [
	I1010 18:49:20.402031  117647 command_runner.go:130] >     {
	I1010 18:49:20.402039  117647 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1010 18:49:20.402044  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402050  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1010 18:49:20.402054  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402058  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402065  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1010 18:49:20.402072  117647 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1010 18:49:20.402076  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402080  117647 command_runner.go:130] >       "size": "87190579",
	I1010 18:49:20.402083  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.402088  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402097  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402103  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402107  117647 command_runner.go:130] >     },
	I1010 18:49:20.402113  117647 command_runner.go:130] >     {
	I1010 18:49:20.402119  117647 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1010 18:49:20.402125  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402130  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1010 18:49:20.402144  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402151  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402158  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1010 18:49:20.402167  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1010 18:49:20.402173  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402178  117647 command_runner.go:130] >       "size": "1363676",
	I1010 18:49:20.402184  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.402191  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402198  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402202  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402205  117647 command_runner.go:130] >     },
	I1010 18:49:20.402210  117647 command_runner.go:130] >     {
	I1010 18:49:20.402216  117647 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1010 18:49:20.402232  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402239  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1010 18:49:20.402243  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402248  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402255  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1010 18:49:20.402265  117647 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1010 18:49:20.402271  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402275  117647 command_runner.go:130] >       "size": "31470524",
	I1010 18:49:20.402281  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.402285  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402291  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402295  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402300  117647 command_runner.go:130] >     },
	I1010 18:49:20.402304  117647 command_runner.go:130] >     {
	I1010 18:49:20.402312  117647 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1010 18:49:20.402316  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402323  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1010 18:49:20.402327  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402333  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402340  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1010 18:49:20.402360  117647 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1010 18:49:20.402366  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402371  117647 command_runner.go:130] >       "size": "63273227",
	I1010 18:49:20.402376  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.402380  117647 command_runner.go:130] >       "username": "nonroot",
	I1010 18:49:20.402384  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402388  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402395  117647 command_runner.go:130] >     },
	I1010 18:49:20.402398  117647 command_runner.go:130] >     {
	I1010 18:49:20.402408  117647 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1010 18:49:20.402416  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402423  117647 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1010 18:49:20.402431  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402437  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402450  117647 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1010 18:49:20.402464  117647 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1010 18:49:20.402470  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402477  117647 command_runner.go:130] >       "size": "149009664",
	I1010 18:49:20.402484  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.402488  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.402493  117647 command_runner.go:130] >       },
	I1010 18:49:20.402498  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402503  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402508  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402513  117647 command_runner.go:130] >     },
	I1010 18:49:20.402517  117647 command_runner.go:130] >     {
	I1010 18:49:20.402528  117647 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1010 18:49:20.402538  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402548  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1010 18:49:20.402557  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402564  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402571  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1010 18:49:20.402580  117647 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1010 18:49:20.402595  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402602  117647 command_runner.go:130] >       "size": "95237600",
	I1010 18:49:20.402606  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.402614  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.402622  117647 command_runner.go:130] >       },
	I1010 18:49:20.402632  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402639  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402649  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402657  117647 command_runner.go:130] >     },
	I1010 18:49:20.402665  117647 command_runner.go:130] >     {
	I1010 18:49:20.402674  117647 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1010 18:49:20.402680  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402686  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1010 18:49:20.402691  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402696  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402705  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1010 18:49:20.402718  117647 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1010 18:49:20.402728  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402734  117647 command_runner.go:130] >       "size": "89437508",
	I1010 18:49:20.402743  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.402753  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.402761  117647 command_runner.go:130] >       },
	I1010 18:49:20.402767  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402777  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402785  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402790  117647 command_runner.go:130] >     },
	I1010 18:49:20.402793  117647 command_runner.go:130] >     {
	I1010 18:49:20.402802  117647 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1010 18:49:20.402809  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402819  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1010 18:49:20.402828  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402835  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.402872  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1010 18:49:20.402892  117647 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1010 18:49:20.402900  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402904  117647 command_runner.go:130] >       "size": "92733849",
	I1010 18:49:20.402912  117647 command_runner.go:130] >       "uid": null,
	I1010 18:49:20.402921  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.402932  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.402940  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.402949  117647 command_runner.go:130] >     },
	I1010 18:49:20.402958  117647 command_runner.go:130] >     {
	I1010 18:49:20.402969  117647 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1010 18:49:20.402978  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.402987  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1010 18:49:20.402994  117647 command_runner.go:130] >       ],
	I1010 18:49:20.402999  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.403014  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1010 18:49:20.403028  117647 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1010 18:49:20.403036  117647 command_runner.go:130] >       ],
	I1010 18:49:20.403046  117647 command_runner.go:130] >       "size": "68420934",
	I1010 18:49:20.403054  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.403062  117647 command_runner.go:130] >         "value": "0"
	I1010 18:49:20.403069  117647 command_runner.go:130] >       },
	I1010 18:49:20.403076  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.403082  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.403087  117647 command_runner.go:130] >       "pinned": false
	I1010 18:49:20.403095  117647 command_runner.go:130] >     },
	I1010 18:49:20.403104  117647 command_runner.go:130] >     {
	I1010 18:49:20.403114  117647 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1010 18:49:20.403123  117647 command_runner.go:130] >       "repoTags": [
	I1010 18:49:20.403133  117647 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1010 18:49:20.403142  117647 command_runner.go:130] >       ],
	I1010 18:49:20.403150  117647 command_runner.go:130] >       "repoDigests": [
	I1010 18:49:20.403164  117647 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1010 18:49:20.403175  117647 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1010 18:49:20.403188  117647 command_runner.go:130] >       ],
	I1010 18:49:20.403199  117647 command_runner.go:130] >       "size": "742080",
	I1010 18:49:20.403205  117647 command_runner.go:130] >       "uid": {
	I1010 18:49:20.403212  117647 command_runner.go:130] >         "value": "65535"
	I1010 18:49:20.403221  117647 command_runner.go:130] >       },
	I1010 18:49:20.403234  117647 command_runner.go:130] >       "username": "",
	I1010 18:49:20.403243  117647 command_runner.go:130] >       "spec": null,
	I1010 18:49:20.403253  117647 command_runner.go:130] >       "pinned": true
	I1010 18:49:20.403261  117647 command_runner.go:130] >     }
	I1010 18:49:20.403269  117647 command_runner.go:130] >   ]
	I1010 18:49:20.403275  117647 command_runner.go:130] > }
	I1010 18:49:20.403412  117647 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:49:20.403425  117647 cache_images.go:84] Images are preloaded, skipping loading
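For reference, the image inventory logged above is the JSON listing minikube collects from the container runtime on the node before deciding the preload is complete. A minimal way to reproduce that listing by hand, as a sketch that assumes you shell into the guest with minikube ssh and that jq is available there (it may not be installed by default):

	sudo crictl images --output json | jq -r '.images[] | "\(.repoTags[0] // .id)  \(.size)"'

This prints each image tag (or bare ID when untagged) with its size in bytes, the same repoTags/size fields that are checked before the "Images are preloaded, skipping loading" message is logged.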
	I1010 18:49:20.403434  117647 kubeadm.go:934] updating node { 192.168.39.28 8443 v1.31.1 crio true true} ...
	I1010 18:49:20.403555  117647 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-965291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-965291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
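The kubelet ExecStart rendered above is written into a systemd drop-in inside the guest. A quick way to confirm what the unit actually loaded, as a sketch assuming the multinode-965291 profile is still running, is to print the unit together with all of its drop-ins from the host:

	minikube -p multinode-965291 ssh "sudo systemctl cat kubelet"

The ExecStart line shown by systemctl cat should carry the same --hostname-override=multinode-965291 and --node-ip=192.168.39.28 values generated here.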
	I1010 18:49:20.403678  117647 ssh_runner.go:195] Run: crio config
	I1010 18:49:20.438214  117647 command_runner.go:130] ! time="2024-10-10 18:49:20.418078460Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1010 18:49:20.444692  117647 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1010 18:49:20.451528  117647 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1010 18:49:20.451560  117647 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1010 18:49:20.451575  117647 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1010 18:49:20.451581  117647 command_runner.go:130] > #
	I1010 18:49:20.451592  117647 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1010 18:49:20.451602  117647 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1010 18:49:20.451613  117647 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1010 18:49:20.451623  117647 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1010 18:49:20.451629  117647 command_runner.go:130] > # reload'.
	I1010 18:49:20.451638  117647 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1010 18:49:20.451653  117647 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1010 18:49:20.451663  117647 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1010 18:49:20.451673  117647 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1010 18:49:20.451683  117647 command_runner.go:130] > [crio]
	I1010 18:49:20.451692  117647 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1010 18:49:20.451699  117647 command_runner.go:130] > # containers images, in this directory.
	I1010 18:49:20.451707  117647 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1010 18:49:20.451736  117647 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1010 18:49:20.451748  117647 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1010 18:49:20.451759  117647 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1010 18:49:20.451766  117647 command_runner.go:130] > # imagestore = ""
	I1010 18:49:20.451777  117647 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1010 18:49:20.451783  117647 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1010 18:49:20.451790  117647 command_runner.go:130] > storage_driver = "overlay"
	I1010 18:49:20.451796  117647 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1010 18:49:20.451802  117647 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1010 18:49:20.451807  117647 command_runner.go:130] > storage_option = [
	I1010 18:49:20.451811  117647 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1010 18:49:20.451817  117647 command_runner.go:130] > ]
	I1010 18:49:20.451823  117647 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1010 18:49:20.451832  117647 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1010 18:49:20.451837  117647 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1010 18:49:20.451844  117647 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1010 18:49:20.451850  117647 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1010 18:49:20.451856  117647 command_runner.go:130] > # always happen on a node reboot
	I1010 18:49:20.451861  117647 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1010 18:49:20.451873  117647 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1010 18:49:20.451881  117647 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1010 18:49:20.451886  117647 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1010 18:49:20.451893  117647 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1010 18:49:20.451900  117647 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1010 18:49:20.451912  117647 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1010 18:49:20.451919  117647 command_runner.go:130] > # internal_wipe = true
	I1010 18:49:20.451926  117647 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1010 18:49:20.451934  117647 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1010 18:49:20.451939  117647 command_runner.go:130] > # internal_repair = false
	I1010 18:49:20.451946  117647 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1010 18:49:20.451952  117647 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1010 18:49:20.451958  117647 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1010 18:49:20.451964  117647 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1010 18:49:20.451976  117647 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1010 18:49:20.451982  117647 command_runner.go:130] > [crio.api]
	I1010 18:49:20.451988  117647 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1010 18:49:20.451992  117647 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1010 18:49:20.451998  117647 command_runner.go:130] > # IP address on which the stream server will listen.
	I1010 18:49:20.452004  117647 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1010 18:49:20.452010  117647 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1010 18:49:20.452015  117647 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1010 18:49:20.452018  117647 command_runner.go:130] > # stream_port = "0"
	I1010 18:49:20.452023  117647 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1010 18:49:20.452027  117647 command_runner.go:130] > # stream_enable_tls = false
	I1010 18:49:20.452032  117647 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1010 18:49:20.452036  117647 command_runner.go:130] > # stream_idle_timeout = ""
	I1010 18:49:20.452042  117647 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1010 18:49:20.452048  117647 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1010 18:49:20.452051  117647 command_runner.go:130] > # minutes.
	I1010 18:49:20.452057  117647 command_runner.go:130] > # stream_tls_cert = ""
	I1010 18:49:20.452062  117647 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1010 18:49:20.452068  117647 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1010 18:49:20.452072  117647 command_runner.go:130] > # stream_tls_key = ""
	I1010 18:49:20.452078  117647 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1010 18:49:20.452086  117647 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1010 18:49:20.452109  117647 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1010 18:49:20.452115  117647 command_runner.go:130] > # stream_tls_ca = ""
	I1010 18:49:20.452122  117647 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1010 18:49:20.452128  117647 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1010 18:49:20.452135  117647 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1010 18:49:20.452139  117647 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1010 18:49:20.452164  117647 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1010 18:49:20.452176  117647 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1010 18:49:20.452182  117647 command_runner.go:130] > [crio.runtime]
	I1010 18:49:20.452188  117647 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1010 18:49:20.452195  117647 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1010 18:49:20.452204  117647 command_runner.go:130] > # "nofile=1024:2048"
	I1010 18:49:20.452213  117647 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1010 18:49:20.452217  117647 command_runner.go:130] > # default_ulimits = [
	I1010 18:49:20.452220  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452227  117647 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1010 18:49:20.452231  117647 command_runner.go:130] > # no_pivot = false
	I1010 18:49:20.452237  117647 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1010 18:49:20.452245  117647 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1010 18:49:20.452254  117647 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1010 18:49:20.452261  117647 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1010 18:49:20.452267  117647 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1010 18:49:20.452275  117647 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1010 18:49:20.452280  117647 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1010 18:49:20.452284  117647 command_runner.go:130] > # Cgroup setting for conmon
	I1010 18:49:20.452291  117647 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1010 18:49:20.452297  117647 command_runner.go:130] > conmon_cgroup = "pod"
	I1010 18:49:20.452303  117647 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1010 18:49:20.452309  117647 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1010 18:49:20.452317  117647 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1010 18:49:20.452321  117647 command_runner.go:130] > conmon_env = [
	I1010 18:49:20.452328  117647 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1010 18:49:20.452332  117647 command_runner.go:130] > ]
	I1010 18:49:20.452337  117647 command_runner.go:130] > # Additional environment variables to set for all the
	I1010 18:49:20.452344  117647 command_runner.go:130] > # containers. These are overridden if set in the
	I1010 18:49:20.452349  117647 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1010 18:49:20.452352  117647 command_runner.go:130] > # default_env = [
	I1010 18:49:20.452356  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452361  117647 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1010 18:49:20.452370  117647 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1010 18:49:20.452375  117647 command_runner.go:130] > # selinux = false
	I1010 18:49:20.452383  117647 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1010 18:49:20.452389  117647 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1010 18:49:20.452394  117647 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1010 18:49:20.452404  117647 command_runner.go:130] > # seccomp_profile = ""
	I1010 18:49:20.452411  117647 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1010 18:49:20.452421  117647 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1010 18:49:20.452430  117647 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1010 18:49:20.452434  117647 command_runner.go:130] > # which might increase security.
	I1010 18:49:20.452439  117647 command_runner.go:130] > # This option is currently deprecated,
	I1010 18:49:20.452446  117647 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1010 18:49:20.452453  117647 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1010 18:49:20.452460  117647 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1010 18:49:20.452468  117647 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1010 18:49:20.452474  117647 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1010 18:49:20.452480  117647 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1010 18:49:20.452487  117647 command_runner.go:130] > # This option supports live configuration reload.
	I1010 18:49:20.452492  117647 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1010 18:49:20.452499  117647 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1010 18:49:20.452504  117647 command_runner.go:130] > # the cgroup blockio controller.
	I1010 18:49:20.452508  117647 command_runner.go:130] > # blockio_config_file = ""
	I1010 18:49:20.452514  117647 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1010 18:49:20.452520  117647 command_runner.go:130] > # blockio parameters.
	I1010 18:49:20.452526  117647 command_runner.go:130] > # blockio_reload = false
	I1010 18:49:20.452549  117647 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1010 18:49:20.452560  117647 command_runner.go:130] > # irqbalance daemon.
	I1010 18:49:20.452565  117647 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1010 18:49:20.452572  117647 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1010 18:49:20.452580  117647 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1010 18:49:20.452587  117647 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1010 18:49:20.452597  117647 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1010 18:49:20.452603  117647 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1010 18:49:20.452610  117647 command_runner.go:130] > # This option supports live configuration reload.
	I1010 18:49:20.452615  117647 command_runner.go:130] > # rdt_config_file = ""
	I1010 18:49:20.452622  117647 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1010 18:49:20.452627  117647 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1010 18:49:20.452654  117647 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1010 18:49:20.452663  117647 command_runner.go:130] > # separate_pull_cgroup = ""
	I1010 18:49:20.452669  117647 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1010 18:49:20.452675  117647 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1010 18:49:20.452679  117647 command_runner.go:130] > # will be added.
	I1010 18:49:20.452683  117647 command_runner.go:130] > # default_capabilities = [
	I1010 18:49:20.452689  117647 command_runner.go:130] > # 	"CHOWN",
	I1010 18:49:20.452693  117647 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1010 18:49:20.452698  117647 command_runner.go:130] > # 	"FSETID",
	I1010 18:49:20.452702  117647 command_runner.go:130] > # 	"FOWNER",
	I1010 18:49:20.452708  117647 command_runner.go:130] > # 	"SETGID",
	I1010 18:49:20.452711  117647 command_runner.go:130] > # 	"SETUID",
	I1010 18:49:20.452717  117647 command_runner.go:130] > # 	"SETPCAP",
	I1010 18:49:20.452721  117647 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1010 18:49:20.452726  117647 command_runner.go:130] > # 	"KILL",
	I1010 18:49:20.452729  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452737  117647 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1010 18:49:20.452745  117647 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1010 18:49:20.452750  117647 command_runner.go:130] > # add_inheritable_capabilities = false
	I1010 18:49:20.452758  117647 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1010 18:49:20.452764  117647 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1010 18:49:20.452768  117647 command_runner.go:130] > default_sysctls = [
	I1010 18:49:20.452773  117647 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1010 18:49:20.452777  117647 command_runner.go:130] > ]
	I1010 18:49:20.452782  117647 command_runner.go:130] > # List of devices on the host that a
	I1010 18:49:20.452788  117647 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1010 18:49:20.452792  117647 command_runner.go:130] > # allowed_devices = [
	I1010 18:49:20.452797  117647 command_runner.go:130] > # 	"/dev/fuse",
	I1010 18:49:20.452800  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452805  117647 command_runner.go:130] > # List of additional devices, specified as
	I1010 18:49:20.452815  117647 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1010 18:49:20.452819  117647 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1010 18:49:20.452827  117647 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1010 18:49:20.452831  117647 command_runner.go:130] > # additional_devices = [
	I1010 18:49:20.452835  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452840  117647 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1010 18:49:20.452846  117647 command_runner.go:130] > # cdi_spec_dirs = [
	I1010 18:49:20.452858  117647 command_runner.go:130] > # 	"/etc/cdi",
	I1010 18:49:20.452862  117647 command_runner.go:130] > # 	"/var/run/cdi",
	I1010 18:49:20.452866  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452872  117647 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1010 18:49:20.452879  117647 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1010 18:49:20.452883  117647 command_runner.go:130] > # Defaults to false.
	I1010 18:49:20.452887  117647 command_runner.go:130] > # device_ownership_from_security_context = false
	I1010 18:49:20.452894  117647 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1010 18:49:20.452902  117647 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1010 18:49:20.452906  117647 command_runner.go:130] > # hooks_dir = [
	I1010 18:49:20.452913  117647 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1010 18:49:20.452916  117647 command_runner.go:130] > # ]
	I1010 18:49:20.452922  117647 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1010 18:49:20.452931  117647 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1010 18:49:20.452936  117647 command_runner.go:130] > # its default mounts from the following two files:
	I1010 18:49:20.452941  117647 command_runner.go:130] > #
	I1010 18:49:20.452946  117647 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1010 18:49:20.452954  117647 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1010 18:49:20.452960  117647 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1010 18:49:20.452965  117647 command_runner.go:130] > #
	I1010 18:49:20.452971  117647 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1010 18:49:20.452977  117647 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1010 18:49:20.452985  117647 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1010 18:49:20.452990  117647 command_runner.go:130] > #      only add mounts it finds in this file.
	I1010 18:49:20.452996  117647 command_runner.go:130] > #
	I1010 18:49:20.453000  117647 command_runner.go:130] > # default_mounts_file = ""
	I1010 18:49:20.453006  117647 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1010 18:49:20.453015  117647 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1010 18:49:20.453020  117647 command_runner.go:130] > pids_limit = 1024
	I1010 18:49:20.453026  117647 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1010 18:49:20.453034  117647 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1010 18:49:20.453040  117647 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1010 18:49:20.453048  117647 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1010 18:49:20.453054  117647 command_runner.go:130] > # log_size_max = -1
	I1010 18:49:20.453060  117647 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1010 18:49:20.453067  117647 command_runner.go:130] > # log_to_journald = false
	I1010 18:49:20.453074  117647 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1010 18:49:20.453081  117647 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1010 18:49:20.453085  117647 command_runner.go:130] > # Path to directory for container attach sockets.
	I1010 18:49:20.453090  117647 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1010 18:49:20.453097  117647 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1010 18:49:20.453101  117647 command_runner.go:130] > # bind_mount_prefix = ""
	I1010 18:49:20.453110  117647 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1010 18:49:20.453115  117647 command_runner.go:130] > # read_only = false
	I1010 18:49:20.453124  117647 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1010 18:49:20.453130  117647 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1010 18:49:20.453136  117647 command_runner.go:130] > # live configuration reload.
	I1010 18:49:20.453141  117647 command_runner.go:130] > # log_level = "info"
	I1010 18:49:20.453149  117647 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1010 18:49:20.453154  117647 command_runner.go:130] > # This option supports live configuration reload.
	I1010 18:49:20.453160  117647 command_runner.go:130] > # log_filter = ""
	I1010 18:49:20.453167  117647 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1010 18:49:20.453177  117647 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1010 18:49:20.453181  117647 command_runner.go:130] > # separated by comma.
	I1010 18:49:20.453188  117647 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1010 18:49:20.453194  117647 command_runner.go:130] > # uid_mappings = ""
	I1010 18:49:20.453200  117647 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1010 18:49:20.453208  117647 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1010 18:49:20.453213  117647 command_runner.go:130] > # separated by comma.
	I1010 18:49:20.453222  117647 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1010 18:49:20.453228  117647 command_runner.go:130] > # gid_mappings = ""
	I1010 18:49:20.453234  117647 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1010 18:49:20.453241  117647 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1010 18:49:20.453248  117647 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1010 18:49:20.453257  117647 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1010 18:49:20.453261  117647 command_runner.go:130] > # minimum_mappable_uid = -1
	I1010 18:49:20.453267  117647 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1010 18:49:20.453275  117647 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1010 18:49:20.453281  117647 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1010 18:49:20.453289  117647 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1010 18:49:20.453295  117647 command_runner.go:130] > # minimum_mappable_gid = -1
	I1010 18:49:20.453300  117647 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1010 18:49:20.453306  117647 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1010 18:49:20.453314  117647 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1010 18:49:20.453318  117647 command_runner.go:130] > # ctr_stop_timeout = 30
	I1010 18:49:20.453325  117647 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1010 18:49:20.453331  117647 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1010 18:49:20.453338  117647 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1010 18:49:20.453342  117647 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1010 18:49:20.453348  117647 command_runner.go:130] > drop_infra_ctr = false
	I1010 18:49:20.453353  117647 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1010 18:49:20.453361  117647 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1010 18:49:20.453368  117647 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1010 18:49:20.453374  117647 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1010 18:49:20.453380  117647 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1010 18:49:20.453388  117647 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1010 18:49:20.453393  117647 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1010 18:49:20.453400  117647 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1010 18:49:20.453404  117647 command_runner.go:130] > # shared_cpuset = ""
	I1010 18:49:20.453412  117647 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1010 18:49:20.453420  117647 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1010 18:49:20.453427  117647 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1010 18:49:20.453433  117647 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1010 18:49:20.453440  117647 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1010 18:49:20.453445  117647 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1010 18:49:20.453453  117647 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1010 18:49:20.453457  117647 command_runner.go:130] > # enable_criu_support = false
	I1010 18:49:20.453464  117647 command_runner.go:130] > # Enable/disable the generation of the container,
	I1010 18:49:20.453469  117647 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1010 18:49:20.453475  117647 command_runner.go:130] > # enable_pod_events = false
	I1010 18:49:20.453481  117647 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1010 18:49:20.453489  117647 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1010 18:49:20.453494  117647 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1010 18:49:20.453498  117647 command_runner.go:130] > # default_runtime = "runc"
	I1010 18:49:20.453503  117647 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1010 18:49:20.453510  117647 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1010 18:49:20.453520  117647 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1010 18:49:20.453525  117647 command_runner.go:130] > # creation as a file is not desired either.
	I1010 18:49:20.453533  117647 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1010 18:49:20.453539  117647 command_runner.go:130] > # the hostname is being managed dynamically.
	I1010 18:49:20.453543  117647 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1010 18:49:20.453549  117647 command_runner.go:130] > # ]
	I1010 18:49:20.453554  117647 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1010 18:49:20.453560  117647 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1010 18:49:20.453566  117647 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1010 18:49:20.453571  117647 command_runner.go:130] > # Each entry in the table should follow the format:
	I1010 18:49:20.453576  117647 command_runner.go:130] > #
	I1010 18:49:20.453581  117647 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1010 18:49:20.453585  117647 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1010 18:49:20.453607  117647 command_runner.go:130] > # runtime_type = "oci"
	I1010 18:49:20.453614  117647 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1010 18:49:20.453620  117647 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1010 18:49:20.453626  117647 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1010 18:49:20.453631  117647 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1010 18:49:20.453637  117647 command_runner.go:130] > # monitor_env = []
	I1010 18:49:20.453642  117647 command_runner.go:130] > # privileged_without_host_devices = false
	I1010 18:49:20.453646  117647 command_runner.go:130] > # allowed_annotations = []
	I1010 18:49:20.453651  117647 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1010 18:49:20.453658  117647 command_runner.go:130] > # Where:
	I1010 18:49:20.453664  117647 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1010 18:49:20.453671  117647 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1010 18:49:20.453677  117647 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1010 18:49:20.453685  117647 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1010 18:49:20.453690  117647 command_runner.go:130] > #   in $PATH.
	I1010 18:49:20.453698  117647 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1010 18:49:20.453702  117647 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1010 18:49:20.453708  117647 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1010 18:49:20.453714  117647 command_runner.go:130] > #   state.
	I1010 18:49:20.453720  117647 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1010 18:49:20.453729  117647 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1010 18:49:20.453735  117647 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1010 18:49:20.453742  117647 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1010 18:49:20.453749  117647 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1010 18:49:20.453757  117647 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1010 18:49:20.453762  117647 command_runner.go:130] > #   The currently recognized values are:
	I1010 18:49:20.453770  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1010 18:49:20.453777  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1010 18:49:20.453785  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1010 18:49:20.453791  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1010 18:49:20.453798  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1010 18:49:20.453806  117647 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1010 18:49:20.453814  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1010 18:49:20.453822  117647 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1010 18:49:20.453828  117647 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1010 18:49:20.453836  117647 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1010 18:49:20.453840  117647 command_runner.go:130] > #   deprecated option "conmon".
	I1010 18:49:20.453849  117647 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1010 18:49:20.453854  117647 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1010 18:49:20.453862  117647 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1010 18:49:20.453867  117647 command_runner.go:130] > #   should be moved to the container's cgroup
	I1010 18:49:20.453875  117647 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1010 18:49:20.453880  117647 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1010 18:49:20.453889  117647 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1010 18:49:20.453896  117647 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1010 18:49:20.453899  117647 command_runner.go:130] > #
	I1010 18:49:20.453904  117647 command_runner.go:130] > # Using the seccomp notifier feature:
	I1010 18:49:20.453909  117647 command_runner.go:130] > #
	I1010 18:49:20.453916  117647 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1010 18:49:20.453925  117647 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1010 18:49:20.453928  117647 command_runner.go:130] > #
	I1010 18:49:20.453934  117647 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1010 18:49:20.453941  117647 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1010 18:49:20.453944  117647 command_runner.go:130] > #
	I1010 18:49:20.453950  117647 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1010 18:49:20.453956  117647 command_runner.go:130] > # feature.
	I1010 18:49:20.453959  117647 command_runner.go:130] > #
	I1010 18:49:20.453965  117647 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1010 18:49:20.453973  117647 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1010 18:49:20.453979  117647 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1010 18:49:20.453987  117647 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1010 18:49:20.453994  117647 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1010 18:49:20.453999  117647 command_runner.go:130] > #
	I1010 18:49:20.454005  117647 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1010 18:49:20.454012  117647 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1010 18:49:20.454033  117647 command_runner.go:130] > #
	I1010 18:49:20.454041  117647 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1010 18:49:20.454046  117647 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1010 18:49:20.454049  117647 command_runner.go:130] > #
	I1010 18:49:20.454055  117647 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1010 18:49:20.454062  117647 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1010 18:49:20.454066  117647 command_runner.go:130] > # limitation.
	I1010 18:49:20.454071  117647 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1010 18:49:20.454076  117647 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1010 18:49:20.454080  117647 command_runner.go:130] > runtime_type = "oci"
	I1010 18:49:20.454086  117647 command_runner.go:130] > runtime_root = "/run/runc"
	I1010 18:49:20.454090  117647 command_runner.go:130] > runtime_config_path = ""
	I1010 18:49:20.454097  117647 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1010 18:49:20.454102  117647 command_runner.go:130] > monitor_cgroup = "pod"
	I1010 18:49:20.454107  117647 command_runner.go:130] > monitor_exec_cgroup = ""
	I1010 18:49:20.454112  117647 command_runner.go:130] > monitor_env = [
	I1010 18:49:20.454117  117647 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1010 18:49:20.454123  117647 command_runner.go:130] > ]
	I1010 18:49:20.454128  117647 command_runner.go:130] > privileged_without_host_devices = false
	I1010 18:49:20.454134  117647 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1010 18:49:20.454141  117647 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1010 18:49:20.454147  117647 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1010 18:49:20.454156  117647 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1010 18:49:20.454163  117647 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1010 18:49:20.454174  117647 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1010 18:49:20.454183  117647 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1010 18:49:20.454192  117647 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1010 18:49:20.454198  117647 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1010 18:49:20.454205  117647 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1010 18:49:20.454212  117647 command_runner.go:130] > # Example:
	I1010 18:49:20.454216  117647 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1010 18:49:20.454221  117647 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1010 18:49:20.454226  117647 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1010 18:49:20.454231  117647 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1010 18:49:20.454238  117647 command_runner.go:130] > # cpuset = 0
	I1010 18:49:20.454243  117647 command_runner.go:130] > # cpushares = "0-1"
	I1010 18:49:20.454248  117647 command_runner.go:130] > # Where:
	I1010 18:49:20.454253  117647 command_runner.go:130] > # The workload name is workload-type.
	I1010 18:49:20.454259  117647 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1010 18:49:20.454267  117647 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1010 18:49:20.454272  117647 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1010 18:49:20.454282  117647 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1010 18:49:20.454289  117647 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1010 18:49:20.454294  117647 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1010 18:49:20.454303  117647 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1010 18:49:20.454308  117647 command_runner.go:130] > # Default value is set to true
	I1010 18:49:20.454312  117647 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1010 18:49:20.454319  117647 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1010 18:49:20.454324  117647 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1010 18:49:20.454329  117647 command_runner.go:130] > # Default value is set to 'false'
	I1010 18:49:20.454333  117647 command_runner.go:130] > # disable_hostport_mapping = false
	I1010 18:49:20.454341  117647 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1010 18:49:20.454345  117647 command_runner.go:130] > #
	I1010 18:49:20.454350  117647 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1010 18:49:20.454355  117647 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1010 18:49:20.454361  117647 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1010 18:49:20.454367  117647 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1010 18:49:20.454372  117647 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1010 18:49:20.454376  117647 command_runner.go:130] > [crio.image]
	I1010 18:49:20.454381  117647 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1010 18:49:20.454385  117647 command_runner.go:130] > # default_transport = "docker://"
	I1010 18:49:20.454391  117647 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1010 18:49:20.454397  117647 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1010 18:49:20.454401  117647 command_runner.go:130] > # global_auth_file = ""
	I1010 18:49:20.454405  117647 command_runner.go:130] > # The image used to instantiate infra containers.
	I1010 18:49:20.454410  117647 command_runner.go:130] > # This option supports live configuration reload.
	I1010 18:49:20.454419  117647 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1010 18:49:20.454427  117647 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1010 18:49:20.454432  117647 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1010 18:49:20.454439  117647 command_runner.go:130] > # This option supports live configuration reload.
	I1010 18:49:20.454443  117647 command_runner.go:130] > # pause_image_auth_file = ""
	I1010 18:49:20.454448  117647 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1010 18:49:20.454456  117647 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1010 18:49:20.454462  117647 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1010 18:49:20.454469  117647 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1010 18:49:20.454473  117647 command_runner.go:130] > # pause_command = "/pause"
	I1010 18:49:20.454481  117647 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1010 18:49:20.454489  117647 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1010 18:49:20.454496  117647 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1010 18:49:20.454504  117647 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1010 18:49:20.454511  117647 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1010 18:49:20.454517  117647 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1010 18:49:20.454522  117647 command_runner.go:130] > # pinned_images = [
	I1010 18:49:20.454524  117647 command_runner.go:130] > # ]
	I1010 18:49:20.454532  117647 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1010 18:49:20.454538  117647 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1010 18:49:20.454545  117647 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1010 18:49:20.454551  117647 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1010 18:49:20.454558  117647 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1010 18:49:20.454562  117647 command_runner.go:130] > # signature_policy = ""
	I1010 18:49:20.454570  117647 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1010 18:49:20.454576  117647 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1010 18:49:20.454584  117647 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1010 18:49:20.454591  117647 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1010 18:49:20.454599  117647 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1010 18:49:20.454604  117647 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1010 18:49:20.454612  117647 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1010 18:49:20.454618  117647 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1010 18:49:20.454621  117647 command_runner.go:130] > # changing them here.
	I1010 18:49:20.454625  117647 command_runner.go:130] > # insecure_registries = [
	I1010 18:49:20.454628  117647 command_runner.go:130] > # ]
	I1010 18:49:20.454634  117647 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1010 18:49:20.454641  117647 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1010 18:49:20.454646  117647 command_runner.go:130] > # image_volumes = "mkdir"
	I1010 18:49:20.454652  117647 command_runner.go:130] > # Temporary directory to use for storing big files
	I1010 18:49:20.454657  117647 command_runner.go:130] > # big_files_temporary_dir = ""
	I1010 18:49:20.454663  117647 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1010 18:49:20.454669  117647 command_runner.go:130] > # CNI plugins.
	I1010 18:49:20.454673  117647 command_runner.go:130] > [crio.network]
	I1010 18:49:20.454678  117647 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1010 18:49:20.454685  117647 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1010 18:49:20.454689  117647 command_runner.go:130] > # cni_default_network = ""
	I1010 18:49:20.454695  117647 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1010 18:49:20.454702  117647 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1010 18:49:20.454708  117647 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1010 18:49:20.454712  117647 command_runner.go:130] > # plugin_dirs = [
	I1010 18:49:20.454716  117647 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1010 18:49:20.454719  117647 command_runner.go:130] > # ]
	I1010 18:49:20.454725  117647 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1010 18:49:20.454729  117647 command_runner.go:130] > [crio.metrics]
	I1010 18:49:20.454734  117647 command_runner.go:130] > # Globally enable or disable metrics support.
	I1010 18:49:20.454738  117647 command_runner.go:130] > enable_metrics = true
	I1010 18:49:20.454742  117647 command_runner.go:130] > # Specify enabled metrics collectors.
	I1010 18:49:20.454747  117647 command_runner.go:130] > # Per default all metrics are enabled.
	I1010 18:49:20.454754  117647 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1010 18:49:20.454761  117647 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1010 18:49:20.454767  117647 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1010 18:49:20.454773  117647 command_runner.go:130] > # metrics_collectors = [
	I1010 18:49:20.454777  117647 command_runner.go:130] > # 	"operations",
	I1010 18:49:20.454781  117647 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1010 18:49:20.454786  117647 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1010 18:49:20.454791  117647 command_runner.go:130] > # 	"operations_errors",
	I1010 18:49:20.454795  117647 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1010 18:49:20.454801  117647 command_runner.go:130] > # 	"image_pulls_by_name",
	I1010 18:49:20.454805  117647 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1010 18:49:20.454810  117647 command_runner.go:130] > # 	"image_pulls_failures",
	I1010 18:49:20.454814  117647 command_runner.go:130] > # 	"image_pulls_successes",
	I1010 18:49:20.454821  117647 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1010 18:49:20.454839  117647 command_runner.go:130] > # 	"image_layer_reuse",
	I1010 18:49:20.454846  117647 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1010 18:49:20.454850  117647 command_runner.go:130] > # 	"containers_oom_total",
	I1010 18:49:20.454856  117647 command_runner.go:130] > # 	"containers_oom",
	I1010 18:49:20.454860  117647 command_runner.go:130] > # 	"processes_defunct",
	I1010 18:49:20.454865  117647 command_runner.go:130] > # 	"operations_total",
	I1010 18:49:20.454869  117647 command_runner.go:130] > # 	"operations_latency_seconds",
	I1010 18:49:20.454874  117647 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1010 18:49:20.454880  117647 command_runner.go:130] > # 	"operations_errors_total",
	I1010 18:49:20.454885  117647 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1010 18:49:20.454889  117647 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1010 18:49:20.454894  117647 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1010 18:49:20.454899  117647 command_runner.go:130] > # 	"image_pulls_success_total",
	I1010 18:49:20.454904  117647 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1010 18:49:20.454910  117647 command_runner.go:130] > # 	"containers_oom_count_total",
	I1010 18:49:20.454914  117647 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1010 18:49:20.454918  117647 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1010 18:49:20.454921  117647 command_runner.go:130] > # ]
	I1010 18:49:20.454928  117647 command_runner.go:130] > # The port on which the metrics server will listen.
	I1010 18:49:20.454932  117647 command_runner.go:130] > # metrics_port = 9090
	I1010 18:49:20.454936  117647 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1010 18:49:20.454940  117647 command_runner.go:130] > # metrics_socket = ""
	I1010 18:49:20.454945  117647 command_runner.go:130] > # The certificate for the secure metrics server.
	I1010 18:49:20.454951  117647 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1010 18:49:20.454959  117647 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1010 18:49:20.454963  117647 command_runner.go:130] > # certificate on any modification event.
	I1010 18:49:20.454970  117647 command_runner.go:130] > # metrics_cert = ""
	I1010 18:49:20.454975  117647 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1010 18:49:20.454982  117647 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1010 18:49:20.454986  117647 command_runner.go:130] > # metrics_key = ""
	I1010 18:49:20.454994  117647 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1010 18:49:20.454997  117647 command_runner.go:130] > [crio.tracing]
	I1010 18:49:20.455003  117647 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1010 18:49:20.455008  117647 command_runner.go:130] > # enable_tracing = false
	I1010 18:49:20.455014  117647 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1010 18:49:20.455020  117647 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1010 18:49:20.455027  117647 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1010 18:49:20.455033  117647 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1010 18:49:20.455037  117647 command_runner.go:130] > # CRI-O NRI configuration.
	I1010 18:49:20.455043  117647 command_runner.go:130] > [crio.nri]
	I1010 18:49:20.455047  117647 command_runner.go:130] > # Globally enable or disable NRI.
	I1010 18:49:20.455051  117647 command_runner.go:130] > # enable_nri = false
	I1010 18:49:20.455056  117647 command_runner.go:130] > # NRI socket to listen on.
	I1010 18:49:20.455062  117647 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1010 18:49:20.455066  117647 command_runner.go:130] > # NRI plugin directory to use.
	I1010 18:49:20.455071  117647 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1010 18:49:20.455078  117647 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1010 18:49:20.455082  117647 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1010 18:49:20.455089  117647 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1010 18:49:20.455093  117647 command_runner.go:130] > # nri_disable_connections = false
	I1010 18:49:20.455098  117647 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1010 18:49:20.455104  117647 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1010 18:49:20.455109  117647 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1010 18:49:20.455116  117647 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1010 18:49:20.455124  117647 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1010 18:49:20.455128  117647 command_runner.go:130] > [crio.stats]
	I1010 18:49:20.455134  117647 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1010 18:49:20.455140  117647 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1010 18:49:20.455144  117647 command_runner.go:130] > # stats_collection_period = 0
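The lines above are CRI-O's effective TOML configuration as dumped during provisioning. Purely as a sketch (the path /etc/crio/crio.conf and the use of github.com/BurntSushi/toml are assumptions, not taken from this log), two of the fields shown, pause_image and enable_metrics, could be read back in Go like this:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioConf models only the handful of keys this sketch cares about;
// [crio.image] and [crio.metrics] are nested tables under [crio].
type crioConf struct {
	Crio struct {
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConf
	// Assumed location; minikube may also drop overrides under /etc/crio/crio.conf.d/.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
	fmt.Println("enable_metrics:", cfg.Crio.Metrics.EnableMetrics)
}
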
	I1010 18:49:20.455229  117647 cni.go:84] Creating CNI manager for ""
	I1010 18:49:20.455241  117647 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1010 18:49:20.455252  117647 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 18:49:20.455275  117647 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.28 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-965291 NodeName:multinode-965291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:49:20.455404  117647 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-965291"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
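	
The YAML above is the kubeadm configuration stack (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube generates from the options logged at kubeadm.go:181 and later copies to /var/tmp/minikube/kubeadm.yaml.new. As an illustration of the templating idea only (the template text and struct below are hypothetical, not minikube's actual template or types), a fragment of such a config can be rendered with text/template:

package main

import (
	"log"
	"os"
	"text/template"
)

// initConfigTmpl is a hypothetical, trimmed-down stand-in for the real template.
const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	data := struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
	}{NodeIP: "192.168.39.28", APIServerPort: 8443, NodeName: "multinode-965291"}

	tmpl := template.Must(template.New("init").Parse(initConfigTmpl))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}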
	
	I1010 18:49:20.455478  117647 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 18:49:20.466520  117647 command_runner.go:130] > kubeadm
	I1010 18:49:20.466545  117647 command_runner.go:130] > kubectl
	I1010 18:49:20.466550  117647 command_runner.go:130] > kubelet
	I1010 18:49:20.466575  117647 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:49:20.466623  117647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:49:20.477411  117647 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1010 18:49:20.496981  117647 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:49:20.516702  117647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1010 18:49:20.536051  117647 ssh_runner.go:195] Run: grep 192.168.39.28	control-plane.minikube.internal$ /etc/hosts
	I1010 18:49:20.540178  117647 command_runner.go:130] > 192.168.39.28	control-plane.minikube.internal
	I1010 18:49:20.540266  117647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:49:20.693531  117647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:49:20.713012  117647 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291 for IP: 192.168.39.28
	I1010 18:49:20.713039  117647 certs.go:194] generating shared ca certs ...
	I1010 18:49:20.713073  117647 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:49:20.713307  117647 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 18:49:20.713378  117647 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 18:49:20.713393  117647 certs.go:256] generating profile certs ...
	I1010 18:49:20.713519  117647 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/client.key
	I1010 18:49:20.713598  117647 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/apiserver.key.411c3c18
	I1010 18:49:20.713674  117647 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/proxy-client.key
	I1010 18:49:20.713691  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1010 18:49:20.713709  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1010 18:49:20.713724  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1010 18:49:20.713739  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1010 18:49:20.713753  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1010 18:49:20.713772  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1010 18:49:20.713790  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1010 18:49:20.713806  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1010 18:49:20.713872  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 18:49:20.713924  117647 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 18:49:20.713938  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 18:49:20.713970  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 18:49:20.713999  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:49:20.714027  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 18:49:20.714069  117647 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 18:49:20.714097  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:49:20.714111  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem -> /usr/share/ca-certificates/88876.pem
	I1010 18:49:20.714121  117647 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> /usr/share/ca-certificates/888762.pem
	I1010 18:49:20.714803  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:49:20.741207  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:49:20.766932  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:49:20.793077  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 18:49:20.818410  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:49:20.844146  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 18:49:20.869832  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:49:20.895246  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/multinode-965291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:49:20.920331  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:49:20.946436  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 18:49:20.972329  117647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 18:49:20.998636  117647 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:49:21.017522  117647 ssh_runner.go:195] Run: openssl version
	I1010 18:49:21.024079  117647 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1010 18:49:21.024292  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:49:21.036499  117647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:49:21.041996  117647 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:49:21.042042  117647 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:49:21.042090  117647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:49:21.048277  117647 command_runner.go:130] > b5213941
	I1010 18:49:21.048369  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:49:21.059155  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 18:49:21.070962  117647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 18:49:21.076482  117647 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:49:21.076681  117647 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 18:49:21.076745  117647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 18:49:21.082792  117647 command_runner.go:130] > 51391683
	I1010 18:49:21.082861  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 18:49:21.093358  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 18:49:21.106000  117647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 18:49:21.111155  117647 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:49:21.111431  117647 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 18:49:21.111501  117647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 18:49:21.117647  117647 command_runner.go:130] > 3ec20f2e
	I1010 18:49:21.117715  117647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
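Each certificate above goes through the same three steps: link it into /usr/share/ca-certificates, compute its OpenSSL subject hash with openssl x509 -hash -noout, and link /etc/ssl/certs/<hash>.0 back to it so OpenSSL's hashed lookup finds it. A minimal Go sketch of that idea, assuming the openssl CLI is available and the process runs as root (paths are illustrative, and the real commands above use ln -fs rather than a file copy):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// installCA copies a PEM certificate into place and creates the
// /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL uses for lookup.
func installCA(src, dst string) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0o644); err != nil {
		return err
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // mirror ln -fs: replace any existing link
	return os.Symlink(dst, link)
}

func main() {
	if err := installCA("ca.crt", "/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
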
	I1010 18:49:21.127779  117647 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:49:21.132589  117647 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:49:21.132614  117647 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1010 18:49:21.132622  117647 command_runner.go:130] > Device: 253,1	Inode: 6289960     Links: 1
	I1010 18:49:21.132632  117647 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1010 18:49:21.132642  117647 command_runner.go:130] > Access: 2024-10-10 18:42:36.625167643 +0000
	I1010 18:49:21.132650  117647 command_runner.go:130] > Modify: 2024-10-10 18:42:36.625167643 +0000
	I1010 18:49:21.132658  117647 command_runner.go:130] > Change: 2024-10-10 18:42:36.625167643 +0000
	I1010 18:49:21.132665  117647 command_runner.go:130] >  Birth: 2024-10-10 18:42:36.625167643 +0000
	I1010 18:49:21.132725  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:49:21.139041  117647 command_runner.go:130] > Certificate will not expire
	I1010 18:49:21.139211  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:49:21.145366  117647 command_runner.go:130] > Certificate will not expire
	I1010 18:49:21.145443  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:49:21.151621  117647 command_runner.go:130] > Certificate will not expire
	I1010 18:49:21.151712  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:49:21.157881  117647 command_runner.go:130] > Certificate will not expire
	I1010 18:49:21.157974  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:49:21.164169  117647 command_runner.go:130] > Certificate will not expire
	I1010 18:49:21.164239  117647 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 18:49:21.170327  117647 command_runner.go:130] > Certificate will not expire
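Each openssl x509 -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now. The same check written directly against Go's crypto/x509 (a sketch, not minikube's code; the path is just one of the certificates inspected above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path reaches its
// NotAfter time within the next d (the -checkend window).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}

openssl exits non-zero from -checkend when the certificate would expire within the window; the boolean here carries the same information.
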
	I1010 18:49:21.170400  117647 kubeadm.go:392] StartCluster: {Name:multinode-965291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-965291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.122 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.137 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:49:21.170509  117647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:49:21.170557  117647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:49:21.212963  117647 command_runner.go:130] > b3ca0ce060637bf39d23b9c1e7488f637044d041ee138579ff7f478d0d60b669
	I1010 18:49:21.212984  117647 command_runner.go:130] > 76513cc8bb6d03925307e236110027724b85565078d4d922c8311ed6083a01a6
	I1010 18:49:21.212996  117647 command_runner.go:130] > 97c9f528dac21bf44e6c02e64554a81132b8abf9b31ec579102ba9a042b3d38a
	I1010 18:49:21.213005  117647 command_runner.go:130] > f092e089c21847e0ff48a8248e7faeafada4c7ed83f9e2d19aac4160fde8cf56
	I1010 18:49:21.213010  117647 command_runner.go:130] > c65a6383e328fc83b5178b1b9052992dfd78946436001f2eb8b63fec22e3fa1f
	I1010 18:49:21.213015  117647 command_runner.go:130] > fe6a7f6a2a2853173494e508a5539fa554ffb163cfb97097eb7c185a48e87da8
	I1010 18:49:21.213021  117647 command_runner.go:130] > 5794c9a1761178b22d3156765ad9ecd2f40f38e87266266ce24367188f9b5018
	I1010 18:49:21.213027  117647 command_runner.go:130] > f00129d23471b49f194e49f9941ac40c8694efa5885506aafe4a6628465e47f1
	I1010 18:49:21.213045  117647 cri.go:89] found id: "b3ca0ce060637bf39d23b9c1e7488f637044d041ee138579ff7f478d0d60b669"
	I1010 18:49:21.213054  117647 cri.go:89] found id: "76513cc8bb6d03925307e236110027724b85565078d4d922c8311ed6083a01a6"
	I1010 18:49:21.213057  117647 cri.go:89] found id: "97c9f528dac21bf44e6c02e64554a81132b8abf9b31ec579102ba9a042b3d38a"
	I1010 18:49:21.213060  117647 cri.go:89] found id: "f092e089c21847e0ff48a8248e7faeafada4c7ed83f9e2d19aac4160fde8cf56"
	I1010 18:49:21.213063  117647 cri.go:89] found id: "c65a6383e328fc83b5178b1b9052992dfd78946436001f2eb8b63fec22e3fa1f"
	I1010 18:49:21.213067  117647 cri.go:89] found id: "fe6a7f6a2a2853173494e508a5539fa554ffb163cfb97097eb7c185a48e87da8"
	I1010 18:49:21.213073  117647 cri.go:89] found id: "5794c9a1761178b22d3156765ad9ecd2f40f38e87266266ce24367188f9b5018"
	I1010 18:49:21.213075  117647 cri.go:89] found id: "f00129d23471b49f194e49f9941ac40c8694efa5885506aafe4a6628465e47f1"
	I1010 18:49:21.213078  117647 cri.go:89] found id: ""
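The eight IDs collected above come from crictl ps -a --quiet filtered on the kube-system namespace label, which is how the kube-system containers are enumerated before the cluster restart. A small sketch of issuing that same listing from Go, assuming crictl is installed and the program runs as root on the node:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as in the log: all containers, IDs only,
	// restricted to pods in the kube-system namespace.
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}
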
	I1010 18:49:21.213118  117647 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-965291 -n multinode-965291
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-965291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.24s)

                                                
                                    
x
+
TestPreload (271.66s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-091948 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1010 18:57:50.018172   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-091948 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m9.170542702s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-091948 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-091948 image pull gcr.io/k8s-minikube/busybox: (2.412626915s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-091948
E1010 18:59:42.606807   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:59:59.536628   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-091948: exit status 82 (2m0.481190265s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-091948"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-091948 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-10-10 19:01:37.995900342 +0000 UTC m=+3845.680835537
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-091948 -n test-preload-091948
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-091948 -n test-preload-091948: exit status 3 (18.685725575s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:01:56.677270  122467 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host
	E1010 19:01:56.677291  122467 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.36:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-091948" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-091948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-091948
--- FAIL: TestPreload (271.66s)

                                                
                                    
x
+
TestKubernetesUpgrade (371.95s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857939 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-857939 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m56.434763895s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-857939] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19787
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-857939" primary control-plane node in "kubernetes-upgrade-857939" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 19:03:53.466236  123562 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:03:53.466431  123562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:03:53.466446  123562 out.go:358] Setting ErrFile to fd 2...
	I1010 19:03:53.466453  123562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:03:53.466819  123562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:03:53.467676  123562 out.go:352] Setting JSON to false
	I1010 19:03:53.468994  123562 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9979,"bootTime":1728577054,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:03:53.469059  123562 start.go:139] virtualization: kvm guest
	I1010 19:03:53.470431  123562 out.go:177] * [kubernetes-upgrade-857939] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:03:53.472495  123562 notify.go:220] Checking for updates...
	I1010 19:03:53.473710  123562 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:03:53.477060  123562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:03:53.479460  123562 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:03:53.482143  123562 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:03:53.485065  123562 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:03:53.488000  123562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:03:53.489486  123562 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:03:53.528765  123562 out.go:177] * Using the kvm2 driver based on user configuration
	I1010 19:03:53.531554  123562 start.go:297] selected driver: kvm2
	I1010 19:03:53.531568  123562 start.go:901] validating driver "kvm2" against <nil>
	I1010 19:03:53.531580  123562 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:03:53.532531  123562 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:03:53.550331  123562 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:03:53.567603  123562 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:03:53.567659  123562 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 19:03:53.568014  123562 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 19:03:53.568057  123562 cni.go:84] Creating CNI manager for ""
	I1010 19:03:53.568119  123562 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:03:53.568134  123562 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 19:03:53.568226  123562 start.go:340] cluster config:
	{Name:kubernetes-upgrade-857939 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-857939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:03:53.568348  123562 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:03:53.570251  123562 out.go:177] * Starting "kubernetes-upgrade-857939" primary control-plane node in "kubernetes-upgrade-857939" cluster
	I1010 19:03:53.571700  123562 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:03:53.571747  123562 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1010 19:03:53.571757  123562 cache.go:56] Caching tarball of preloaded images
	I1010 19:03:53.571866  123562 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:03:53.571881  123562 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1010 19:03:53.572333  123562 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/config.json ...
	I1010 19:03:53.572367  123562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/config.json: {Name:mk2128398d85bc8db5d34fca3ff8f9fea14d7197 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:03:53.572559  123562 start.go:360] acquireMachinesLock for kubernetes-upgrade-857939: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:04:17.281993  123562 start.go:364] duration metric: took 23.709398705s to acquireMachinesLock for "kubernetes-upgrade-857939"
	I1010 19:04:17.282077  123562 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-857939 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-857939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:04:17.282225  123562 start.go:125] createHost starting for "" (driver="kvm2")
	I1010 19:04:17.284414  123562 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 19:04:17.284615  123562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:04:17.284675  123562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:04:17.301488  123562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44537
	I1010 19:04:17.301979  123562 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:04:17.302600  123562 main.go:141] libmachine: Using API Version  1
	I1010 19:04:17.302628  123562 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:04:17.303021  123562 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:04:17.303231  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetMachineName
	I1010 19:04:17.303437  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .DriverName
	I1010 19:04:17.303609  123562 start.go:159] libmachine.API.Create for "kubernetes-upgrade-857939" (driver="kvm2")
	I1010 19:04:17.303646  123562 client.go:168] LocalClient.Create starting
	I1010 19:04:17.303684  123562 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 19:04:17.303732  123562 main.go:141] libmachine: Decoding PEM data...
	I1010 19:04:17.303752  123562 main.go:141] libmachine: Parsing certificate...
	I1010 19:04:17.303815  123562 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 19:04:17.303841  123562 main.go:141] libmachine: Decoding PEM data...
	I1010 19:04:17.303853  123562 main.go:141] libmachine: Parsing certificate...
	I1010 19:04:17.303880  123562 main.go:141] libmachine: Running pre-create checks...
	I1010 19:04:17.303893  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .PreCreateCheck
	I1010 19:04:17.304202  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetConfigRaw
	I1010 19:04:17.304657  123562 main.go:141] libmachine: Creating machine...
	I1010 19:04:17.304679  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .Create
	I1010 19:04:17.304826  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Creating KVM machine...
	I1010 19:04:17.305989  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found existing default KVM network
	I1010 19:04:17.306716  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:17.306576  123899 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:37:74:ce} reservation:<nil>}
	I1010 19:04:17.307417  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:17.307338  123899 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000211a50}
	I1010 19:04:17.307449  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | created network xml: 
	I1010 19:04:17.307466  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | <network>
	I1010 19:04:17.307478  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG |   <name>mk-kubernetes-upgrade-857939</name>
	I1010 19:04:17.307487  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG |   <dns enable='no'/>
	I1010 19:04:17.307498  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG |   
	I1010 19:04:17.307510  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1010 19:04:17.307528  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG |     <dhcp>
	I1010 19:04:17.307540  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1010 19:04:17.307549  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG |     </dhcp>
	I1010 19:04:17.307554  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG |   </ip>
	I1010 19:04:17.307559  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG |   
	I1010 19:04:17.307566  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | </network>
	I1010 19:04:17.307573  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | 
	I1010 19:04:17.312832  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | trying to create private KVM network mk-kubernetes-upgrade-857939 192.168.50.0/24...
	I1010 19:04:17.389974  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | private KVM network mk-kubernetes-upgrade-857939 192.168.50.0/24 created
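
The DBG lines above show the <network> document the kvm2 driver renders before asking libvirt to create the isolated mk-* network: DNS disabled, a /24 gateway, and a DHCP range covering .2-.253. A minimal Go sketch of producing the same XML with text/template follows; it is illustrative only (the template, struct, and field names are assumptions, not minikube's actual source).

package main

import (
	"os"
	"text/template"
)

// networkTmpl mirrors the <network> definition printed in the log: a named
// private network with DNS disabled and a DHCP range for guest leases.
const networkTmpl = `<network>
  <name>mk-{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type netParams struct {
	Name      string
	Gateway   string
	ClientMin string
	ClientMax string
}

func main() {
	t := template.Must(template.New("net").Parse(networkTmpl))
	// Values taken from the free subnet the log selected: 192.168.50.0/24.
	p := netParams{
		Name:      "kubernetes-upgrade-857939",
		Gateway:   "192.168.50.1",
		ClientMin: "192.168.50.2",
		ClientMax: "192.168.50.253",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
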
	I1010 19:04:17.390007  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939 ...
	I1010 19:04:17.390020  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:17.389963  123899 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:04:17.390038  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 19:04:17.390201  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 19:04:17.652616  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:17.652475  123899 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939/id_rsa...
	I1010 19:04:17.827236  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:17.827063  123899 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939/kubernetes-upgrade-857939.rawdisk...
	I1010 19:04:17.827276  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Writing magic tar header
	I1010 19:04:17.827306  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Writing SSH key tar header
	I1010 19:04:17.827315  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:17.827232  123899 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939 ...
	I1010 19:04:17.827444  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939
	I1010 19:04:17.827475  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939 (perms=drwx------)
	I1010 19:04:17.827492  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 19:04:17.827511  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:04:17.827525  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 19:04:17.827540  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 19:04:17.827559  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Checking permissions on dir: /home/jenkins
	I1010 19:04:17.827577  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Checking permissions on dir: /home
	I1010 19:04:17.827590  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Skipping /home - not owner
	I1010 19:04:17.827602  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 19:04:17.827618  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 19:04:17.827632  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 19:04:17.827646  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 19:04:17.827659  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 19:04:17.827675  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Creating domain...
	I1010 19:04:17.828969  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) define libvirt domain using xml: 
	I1010 19:04:17.828995  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) <domain type='kvm'>
	I1010 19:04:17.829005  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   <name>kubernetes-upgrade-857939</name>
	I1010 19:04:17.829013  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   <memory unit='MiB'>2200</memory>
	I1010 19:04:17.829021  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   <vcpu>2</vcpu>
	I1010 19:04:17.829028  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   <features>
	I1010 19:04:17.829052  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <acpi/>
	I1010 19:04:17.829061  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <apic/>
	I1010 19:04:17.829074  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <pae/>
	I1010 19:04:17.829083  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     
	I1010 19:04:17.829090  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   </features>
	I1010 19:04:17.829100  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   <cpu mode='host-passthrough'>
	I1010 19:04:17.829114  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   
	I1010 19:04:17.829126  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   </cpu>
	I1010 19:04:17.829135  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   <os>
	I1010 19:04:17.829145  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <type>hvm</type>
	I1010 19:04:17.829154  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <boot dev='cdrom'/>
	I1010 19:04:17.829164  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <boot dev='hd'/>
	I1010 19:04:17.829173  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <bootmenu enable='no'/>
	I1010 19:04:17.829182  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   </os>
	I1010 19:04:17.829189  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   <devices>
	I1010 19:04:17.829206  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <disk type='file' device='cdrom'>
	I1010 19:04:17.829227  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939/boot2docker.iso'/>
	I1010 19:04:17.829238  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <target dev='hdc' bus='scsi'/>
	I1010 19:04:17.829247  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <readonly/>
	I1010 19:04:17.829258  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     </disk>
	I1010 19:04:17.829271  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <disk type='file' device='disk'>
	I1010 19:04:17.829279  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 19:04:17.829311  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939/kubernetes-upgrade-857939.rawdisk'/>
	I1010 19:04:17.829324  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <target dev='hda' bus='virtio'/>
	I1010 19:04:17.829336  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     </disk>
	I1010 19:04:17.829344  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <interface type='network'>
	I1010 19:04:17.829364  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <source network='mk-kubernetes-upgrade-857939'/>
	I1010 19:04:17.829375  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <model type='virtio'/>
	I1010 19:04:17.829388  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     </interface>
	I1010 19:04:17.829401  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <interface type='network'>
	I1010 19:04:17.829414  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <source network='default'/>
	I1010 19:04:17.829421  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <model type='virtio'/>
	I1010 19:04:17.829431  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     </interface>
	I1010 19:04:17.829444  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <serial type='pty'>
	I1010 19:04:17.829465  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <target port='0'/>
	I1010 19:04:17.829478  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     </serial>
	I1010 19:04:17.829490  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <console type='pty'>
	I1010 19:04:17.829501  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <target type='serial' port='0'/>
	I1010 19:04:17.829519  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     </console>
	I1010 19:04:17.829529  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     <rng model='virtio'>
	I1010 19:04:17.829539  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)       <backend model='random'>/dev/random</backend>
	I1010 19:04:17.829552  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     </rng>
	I1010 19:04:17.829563  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     
	I1010 19:04:17.829572  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)     
	I1010 19:04:17.829581  123562 main.go:141] libmachine: (kubernetes-upgrade-857939)   </devices>
	I1010 19:04:17.829590  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) </domain>
	I1010 19:04:17.829600  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) 
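
The "define libvirt domain using xml" block above is the full <domain> document handed to libvirt, followed by "Creating domain..." once it is defined. As a rough sketch of those two steps, assuming the libvirt Go bindings (libvirt.org/go/libvirt) are available and a libvirt daemon is reachable; this is not minikube's code, just the general define-then-start pattern:

package kvmsketch

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart persists a domain definition from a complete <domain> XML
// document (such as the one printed in the log) and then boots it.
func defineAndStart(uri, domainXML string) error {
	conn, err := libvirt.NewConnect(uri) // e.g. "qemu:///system", as in KVMQemuURI
	if err != nil {
		return fmt.Errorf("connect: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // make the definition persistent
	if err != nil {
		return fmt.Errorf("define: %w", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // actually start the VM
		return fmt.Errorf("start: %w", err)
	}
	return nil
}
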
	I1010 19:04:17.837074  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:a1:36:db in network default
	I1010 19:04:17.837680  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Ensuring networks are active...
	I1010 19:04:17.837702  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:17.838560  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Ensuring network default is active
	I1010 19:04:17.838796  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Ensuring network mk-kubernetes-upgrade-857939 is active
	I1010 19:04:17.839566  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Getting domain xml...
	I1010 19:04:17.840300  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Creating domain...
	I1010 19:04:19.172763  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Waiting to get IP...
	I1010 19:04:19.173740  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:19.174279  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:19.174310  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:19.174244  123899 retry.go:31] will retry after 293.830805ms: waiting for machine to come up
	I1010 19:04:19.469759  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:19.470344  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:19.470372  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:19.470289  123899 retry.go:31] will retry after 335.030562ms: waiting for machine to come up
	I1010 19:04:19.807106  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:19.807447  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:19.807472  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:19.807412  123899 retry.go:31] will retry after 415.128801ms: waiting for machine to come up
	I1010 19:04:20.223893  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:20.224401  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:20.224430  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:20.224364  123899 retry.go:31] will retry after 601.443305ms: waiting for machine to come up
	I1010 19:04:20.827131  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:20.827564  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:20.827593  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:20.827528  123899 retry.go:31] will retry after 555.702376ms: waiting for machine to come up
	I1010 19:04:21.384578  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:21.385176  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:21.385209  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:21.385129  123899 retry.go:31] will retry after 886.283127ms: waiting for machine to come up
	I1010 19:04:22.272886  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:22.273461  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:22.273487  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:22.273391  123899 retry.go:31] will retry after 1.081306558s: waiting for machine to come up
	I1010 19:04:23.356728  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:23.357168  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:23.357194  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:23.357134  123899 retry.go:31] will retry after 1.478131325s: waiting for machine to come up
	I1010 19:04:24.836388  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:24.836922  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:24.836972  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:24.836871  123899 retry.go:31] will retry after 1.657073169s: waiting for machine to come up
	I1010 19:04:26.496768  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:26.497192  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:26.497213  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:26.497153  123899 retry.go:31] will retry after 2.225698396s: waiting for machine to come up
	I1010 19:04:28.725284  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:28.725836  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:28.725868  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:28.725776  123899 retry.go:31] will retry after 1.872446622s: waiting for machine to come up
	I1010 19:04:30.600835  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:30.601232  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:30.601259  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:30.601210  123899 retry.go:31] will retry after 2.986991637s: waiting for machine to come up
	I1010 19:04:33.590090  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:33.590547  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:33.590576  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:33.590485  123899 retry.go:31] will retry after 3.612418665s: waiting for machine to come up
	I1010 19:04:37.207256  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:37.207648  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find current IP address of domain kubernetes-upgrade-857939 in network mk-kubernetes-upgrade-857939
	I1010 19:04:37.207670  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | I1010 19:04:37.207604  123899 retry.go:31] will retry after 5.065956093s: waiting for machine to come up
	I1010 19:04:42.279274  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.279817  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Found IP for machine: 192.168.50.54
	I1010 19:04:42.279844  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Reserving static IP address...
	I1010 19:04:42.279873  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has current primary IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.280317  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-857939", mac: "52:54:00:2e:6e:ae", ip: "192.168.50.54"} in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.359877  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Reserved static IP address: 192.168.50.54
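
The "Waiting to get IP..." stretch above is a poll loop: each attempt looks for a DHCP lease for the domain's MAC, and every miss schedules a retry after a growing, slightly randomized delay (293ms, 335ms, 415ms, ... up to several seconds). The following runnable Go sketch reproduces that pattern with a fake lookup; it is an illustration of the backoff idea, not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping for a growing,
// jittered interval between attempts, roughly as the log shows.
func waitForIP(lookup func() (string, error), maxAttempts int) (string, error) {
	backoff := 300 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Jitter so parallel profiles don't poll libvirt in lockstep.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff += backoff / 2 // approximate growth seen in the log
	}
	return "", errors.New("unable to find current IP address of domain")
}

func main() {
	// Fake lookup that succeeds on the fifth attempt, standing in for a
	// DHCP-lease query against the mk-* network.
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 5 {
			return "", errors.New("no lease yet")
		}
		return "192.168.50.54", nil
	}, 20)
	fmt.Println(ip, err)
}
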
	I1010 19:04:42.359917  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Waiting for SSH to be available...
	I1010 19:04:42.359929  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Getting to WaitForSSH function...
	I1010 19:04:42.363306  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.363778  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:42.363825  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.363958  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Using SSH client type: external
	I1010 19:04:42.363986  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939/id_rsa (-rw-------)
	I1010 19:04:42.364033  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:04:42.364045  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | About to run SSH command:
	I1010 19:04:42.364061  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | exit 0
	I1010 19:04:42.493197  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | SSH cmd err, output: <nil>: 
	I1010 19:04:42.493483  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) KVM machine creation complete!
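
WaitForSSH above shells out to an external ssh client with a fixed option set (no host-key checking, key-only auth, short connect timeout) and runs `exit 0` until it succeeds. A small Go sketch of assembling that command with os/exec is shown below; the flag list is copied from the log line, and the function name is a placeholder, not a minikube identifier.

package main

import (
	"fmt"
	"os/exec"
)

// buildSSHProbe builds an ssh invocation like the one logged for WaitForSSH:
// probe the guest by running `exit 0` with the generated machine key only.
func buildSSHProbe(user, ip, keyPath string) *exec.Cmd {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		fmt.Sprintf("%s@%s", user, ip),
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...)
}

func main() {
	cmd := buildSSHProbe("docker", "192.168.50.54",
		"/home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939/id_rsa")
	fmt.Println(cmd.String()) // print rather than execute
}
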
	I1010 19:04:42.493917  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetConfigRaw
	I1010 19:04:42.494575  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .DriverName
	I1010 19:04:42.494806  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .DriverName
	I1010 19:04:42.494956  123562 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 19:04:42.494968  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetState
	I1010 19:04:42.496386  123562 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 19:04:42.496402  123562 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 19:04:42.496417  123562 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 19:04:42.496425  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHHostname
	I1010 19:04:42.498911  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.499295  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:42.499359  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.499471  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHPort
	I1010 19:04:42.499634  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:42.499802  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:42.499956  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHUsername
	I1010 19:04:42.500110  123562 main.go:141] libmachine: Using SSH client type: native
	I1010 19:04:42.500313  123562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I1010 19:04:42.500324  123562 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 19:04:42.612655  123562 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:04:42.612683  123562 main.go:141] libmachine: Detecting the provisioner...
	I1010 19:04:42.612695  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHHostname
	I1010 19:04:42.615631  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.616015  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:42.616046  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.616275  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHPort
	I1010 19:04:42.616476  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:42.616667  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:42.616888  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHUsername
	I1010 19:04:42.617034  123562 main.go:141] libmachine: Using SSH client type: native
	I1010 19:04:42.617221  123562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I1010 19:04:42.617232  123562 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 19:04:42.729882  123562 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 19:04:42.729962  123562 main.go:141] libmachine: found compatible host: buildroot
	I1010 19:04:42.729969  123562 main.go:141] libmachine: Provisioning with buildroot...
	I1010 19:04:42.729978  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetMachineName
	I1010 19:04:42.730368  123562 buildroot.go:166] provisioning hostname "kubernetes-upgrade-857939"
	I1010 19:04:42.730407  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetMachineName
	I1010 19:04:42.730609  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHHostname
	I1010 19:04:42.733752  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.734223  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:42.734257  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.734437  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHPort
	I1010 19:04:42.734601  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:42.734723  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:42.734830  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHUsername
	I1010 19:04:42.734962  123562 main.go:141] libmachine: Using SSH client type: native
	I1010 19:04:42.735134  123562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I1010 19:04:42.735146  123562 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-857939 && echo "kubernetes-upgrade-857939" | sudo tee /etc/hostname
	I1010 19:04:42.864608  123562 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-857939
	
	I1010 19:04:42.864643  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHHostname
	I1010 19:04:42.867383  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.867865  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:42.867973  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.868050  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHPort
	I1010 19:04:42.868267  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:42.868455  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:42.868605  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHUsername
	I1010 19:04:42.868782  123562 main.go:141] libmachine: Using SSH client type: native
	I1010 19:04:42.869045  123562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I1010 19:04:42.869069  123562 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-857939' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-857939/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-857939' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:04:42.990845  123562 main.go:141] libmachine: SSH cmd err, output: <nil>: 
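
The two SSH commands above set the hostname and then patch /etc/hosts so 127.0.1.1 resolves to the new name. A short Go sketch that composes those same shell snippets is given below; the helper name is hypothetical and the commands are taken directly from the log, so this only illustrates how the strings are built, not how minikube runs them.

package main

import "fmt"

// hostnameCommands returns the two provisioning commands seen in the log:
// set the hostname, then ensure /etc/hosts maps 127.0.1.1 to it.
func hostnameCommands(hostname string) []string {
	setHostname := fmt.Sprintf(
		"sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", hostname)
	fixHosts := fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
	return []string{setHostname, fixHosts}
}

func main() {
	for _, c := range hostnameCommands("kubernetes-upgrade-857939") {
		fmt.Println(c)
	}
}
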
	I1010 19:04:42.990880  123562 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:04:42.990910  123562 buildroot.go:174] setting up certificates
	I1010 19:04:42.990931  123562 provision.go:84] configureAuth start
	I1010 19:04:42.990944  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetMachineName
	I1010 19:04:42.991261  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetIP
	I1010 19:04:42.994476  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.994956  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:42.995013  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.995235  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHHostname
	I1010 19:04:42.998200  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.998702  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:42.998730  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:42.998895  123562 provision.go:143] copyHostCerts
	I1010 19:04:42.998967  123562 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:04:42.998981  123562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:04:42.999058  123562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:04:42.999167  123562 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:04:42.999194  123562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:04:42.999218  123562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:04:42.999271  123562 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:04:42.999280  123562 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:04:42.999298  123562 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:04:42.999341  123562 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-857939 san=[127.0.0.1 192.168.50.54 kubernetes-upgrade-857939 localhost minikube]
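
The "generating server cert ... san=[...]" step produces a server certificate whose SAN list carries 127.0.0.1, the VM IP, and the hostname aliases, signed by the profile CA. The runnable sketch below shows what such a SAN cert looks like with Go's crypto/x509; for brevity it signs with a throwaway in-memory CA template rather than the persistent ca.pem/ca-key.pem pair the log references, so treat it as an assumption-laden illustration only.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA used only to sign the example server cert.
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}

	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-857939"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log: loopback, the VM IP, and the name aliases.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.54")},
		DNSNames:    []string{"kubernetes-upgrade-857939", "localhost", "minikube"},
	}

	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert: %d DER bytes\n", len(der))
}
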
	I1010 19:04:43.068691  123562 provision.go:177] copyRemoteCerts
	I1010 19:04:43.068757  123562 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:04:43.068788  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHHostname
	I1010 19:04:43.071787  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.072019  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:43.072050  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.072276  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHPort
	I1010 19:04:43.072500  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:43.072663  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHUsername
	I1010 19:04:43.072779  123562 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939/id_rsa Username:docker}
	I1010 19:04:43.159605  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:04:43.186417  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1010 19:04:43.211214  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:04:43.236727  123562 provision.go:87] duration metric: took 245.777332ms to configureAuth
	I1010 19:04:43.236761  123562 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:04:43.236975  123562 config.go:182] Loaded profile config "kubernetes-upgrade-857939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:04:43.237053  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHHostname
	I1010 19:04:43.239809  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.240203  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:43.240227  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.240479  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHPort
	I1010 19:04:43.240698  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:43.240882  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:43.241038  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHUsername
	I1010 19:04:43.241282  123562 main.go:141] libmachine: Using SSH client type: native
	I1010 19:04:43.241485  123562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I1010 19:04:43.241506  123562 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:04:43.488413  123562 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:04:43.488447  123562 main.go:141] libmachine: Checking connection to Docker...
	I1010 19:04:43.488459  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetURL
	I1010 19:04:43.489973  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | Using libvirt version 6000000
	I1010 19:04:43.492455  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.492805  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:43.492908  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.492954  123562 main.go:141] libmachine: Docker is up and running!
	I1010 19:04:43.492979  123562 main.go:141] libmachine: Reticulating splines...
	I1010 19:04:43.492986  123562 client.go:171] duration metric: took 26.189330157s to LocalClient.Create
	I1010 19:04:43.493011  123562 start.go:167] duration metric: took 26.189406917s to libmachine.API.Create "kubernetes-upgrade-857939"
	I1010 19:04:43.493021  123562 start.go:293] postStartSetup for "kubernetes-upgrade-857939" (driver="kvm2")
	I1010 19:04:43.493031  123562 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:04:43.493048  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .DriverName
	I1010 19:04:43.493347  123562 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:04:43.493382  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHHostname
	I1010 19:04:43.495745  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.496121  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:43.496155  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.496264  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHPort
	I1010 19:04:43.496475  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:43.496644  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHUsername
	I1010 19:04:43.496829  123562 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939/id_rsa Username:docker}
	I1010 19:04:43.584648  123562 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:04:43.589703  123562 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:04:43.589736  123562 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:04:43.589796  123562 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:04:43.589884  123562 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:04:43.589978  123562 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:04:43.601411  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:04:43.626941  123562 start.go:296] duration metric: took 133.905627ms for postStartSetup
	I1010 19:04:43.626996  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetConfigRaw
	I1010 19:04:43.627705  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetIP
	I1010 19:04:43.630657  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.631108  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:43.631140  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.631610  123562 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/config.json ...
	I1010 19:04:43.631860  123562 start.go:128] duration metric: took 26.349622322s to createHost
	I1010 19:04:43.631901  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHHostname
	I1010 19:04:43.634732  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.635072  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:43.635112  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.635324  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHPort
	I1010 19:04:43.635528  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:43.635696  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:43.635908  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHUsername
	I1010 19:04:43.636068  123562 main.go:141] libmachine: Using SSH client type: native
	I1010 19:04:43.636256  123562 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.54 22 <nil> <nil>}
	I1010 19:04:43.636266  123562 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:04:43.749862  123562 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728587083.727478271
	
	I1010 19:04:43.749891  123562 fix.go:216] guest clock: 1728587083.727478271
	I1010 19:04:43.749900  123562 fix.go:229] Guest: 2024-10-10 19:04:43.727478271 +0000 UTC Remote: 2024-10-10 19:04:43.63187562 +0000 UTC m=+50.221100174 (delta=95.602651ms)
	I1010 19:04:43.749919  123562 fix.go:200] guest clock delta is within tolerance: 95.602651ms
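
The fix.go lines above compare the guest's `date +%s.%N` output with the host wall clock and accept the skew if it is small (here ~95ms). A Go sketch of that comparison follows; the tolerance value and function name are assumptions for illustration, and float parsing trims a little of the nanosecond precision.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses `date +%s.%N` output from the guest and returns the
// difference against the supplied local timestamp.
func guestClockDelta(guestOutput string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, fmt.Errorf("parse guest clock %q: %w", guestOutput, err)
	}
	whole := int64(secs)
	guest := time.Unix(whole, int64((secs-float64(whole))*1e9))
	return local.Sub(guest), nil
}

func main() {
	// Values taken from the log lines above; tolerance is an assumed bound.
	delta, err := guestClockDelta("1728587083.727478271",
		time.Date(2024, 10, 10, 19, 4, 43, 631875620, time.UTC))
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
}
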
	I1010 19:04:43.749924  123562 start.go:83] releasing machines lock for "kubernetes-upgrade-857939", held for 26.4678866s
	I1010 19:04:43.749948  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .DriverName
	I1010 19:04:43.750254  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetIP
	I1010 19:04:43.753909  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.754330  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:43.754394  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.754501  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .DriverName
	I1010 19:04:43.755218  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .DriverName
	I1010 19:04:43.755443  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .DriverName
	I1010 19:04:43.755546  123562 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:04:43.755594  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHHostname
	I1010 19:04:43.755677  123562 ssh_runner.go:195] Run: cat /version.json
	I1010 19:04:43.755704  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHHostname
	I1010 19:04:43.758543  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.758820  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.758853  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:43.758880  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.759038  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHPort
	I1010 19:04:43.759229  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:43.759305  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:43.759338  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:43.759380  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHUsername
	I1010 19:04:43.759584  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHPort
	I1010 19:04:43.759581  123562 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939/id_rsa Username:docker}
	I1010 19:04:43.759712  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHKeyPath
	I1010 19:04:43.759830  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetSSHUsername
	I1010 19:04:43.759942  123562 sshutil.go:53] new ssh client: &{IP:192.168.50.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/kubernetes-upgrade-857939/id_rsa Username:docker}
	I1010 19:04:43.846702  123562 ssh_runner.go:195] Run: systemctl --version
	I1010 19:04:43.869708  123562 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:04:44.034151  123562 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:04:44.041146  123562 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:04:44.041220  123562 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:04:44.058363  123562 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:04:44.058390  123562 start.go:495] detecting cgroup driver to use...
	I1010 19:04:44.058479  123562 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:04:44.076383  123562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:04:44.092730  123562 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:04:44.092799  123562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:04:44.109716  123562 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:04:44.126862  123562 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:04:44.270150  123562 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:04:44.438889  123562 docker.go:233] disabling docker service ...
	I1010 19:04:44.438957  123562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:04:44.455158  123562 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:04:44.470171  123562 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:04:44.626506  123562 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:04:44.743490  123562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:04:44.758354  123562 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:04:44.779304  123562 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:04:44.779367  123562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:04:44.791564  123562 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:04:44.791636  123562 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:04:44.803366  123562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:04:44.814817  123562 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:04:44.829670  123562 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:04:44.841741  123562 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:04:44.852504  123562 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:04:44.852575  123562 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:04:44.869205  123562 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:04:44.880728  123562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:04:45.023605  123562 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:04:45.119363  123562 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:04:45.119465  123562 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:04:45.127067  123562 start.go:563] Will wait 60s for crictl version
	I1010 19:04:45.127132  123562 ssh_runner.go:195] Run: which crictl
	I1010 19:04:45.131720  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:04:45.176072  123562 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:04:45.176165  123562 ssh_runner.go:195] Run: crio --version
	I1010 19:04:45.214614  123562 ssh_runner.go:195] Run: crio --version
	I1010 19:04:45.248349  123562 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:04:45.249960  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) Calling .GetIP
	I1010 19:04:45.253252  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:45.253723  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:ae", ip: ""} in network mk-kubernetes-upgrade-857939: {Iface:virbr2 ExpiryTime:2024-10-10 20:04:32 +0000 UTC Type:0 Mac:52:54:00:2e:6e:ae Iaid: IPaddr:192.168.50.54 Prefix:24 Hostname:kubernetes-upgrade-857939 Clientid:01:52:54:00:2e:6e:ae}
	I1010 19:04:45.253756  123562 main.go:141] libmachine: (kubernetes-upgrade-857939) DBG | domain kubernetes-upgrade-857939 has defined IP address 192.168.50.54 and MAC address 52:54:00:2e:6e:ae in network mk-kubernetes-upgrade-857939
	I1010 19:04:45.254049  123562 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1010 19:04:45.258710  123562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:04:45.272010  123562 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-857939 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-857939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:04:45.272152  123562 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:04:45.272237  123562 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:04:45.312793  123562 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:04:45.312903  123562 ssh_runner.go:195] Run: which lz4
	I1010 19:04:45.317481  123562 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:04:45.322342  123562 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:04:45.322381  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:04:47.083884  123562 crio.go:462] duration metric: took 1.766459661s to copy over tarball
	I1010 19:04:47.084022  123562 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:04:49.789859  123562 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.70579482s)
	I1010 19:04:49.789891  123562 crio.go:469] duration metric: took 2.705972676s to extract the tarball
	I1010 19:04:49.789902  123562 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:04:49.832489  123562 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:04:49.878056  123562 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:04:49.878086  123562 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:04:49.878203  123562 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:04:49.878202  123562 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:04:49.878233  123562 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:04:49.878236  123562 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:04:49.878202  123562 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:04:49.878255  123562 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:04:49.878287  123562 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:04:49.878295  123562 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:04:49.879730  123562 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:04:49.879799  123562 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:04:49.879748  123562 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:04:49.879768  123562 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:04:49.879918  123562 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:04:49.879774  123562 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:04:49.879995  123562 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:04:49.880203  123562 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:04:50.045578  123562 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:04:50.050755  123562 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:04:50.052323  123562 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:04:50.059546  123562 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:04:50.067602  123562 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:04:50.068292  123562 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:04:50.068761  123562 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:04:50.154973  123562 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:04:50.155025  123562 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:04:50.155074  123562 ssh_runner.go:195] Run: which crictl
	I1010 19:04:50.252345  123562 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:04:50.252391  123562 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:04:50.252445  123562 ssh_runner.go:195] Run: which crictl
	I1010 19:04:50.257015  123562 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:04:50.257063  123562 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:04:50.257122  123562 ssh_runner.go:195] Run: which crictl
	I1010 19:04:50.265756  123562 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:04:50.265808  123562 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:04:50.265768  123562 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:04:50.265863  123562 ssh_runner.go:195] Run: which crictl
	I1010 19:04:50.265865  123562 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:04:50.265907  123562 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:04:50.265874  123562 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:04:50.265948  123562 ssh_runner.go:195] Run: which crictl
	I1010 19:04:50.265964  123562 ssh_runner.go:195] Run: which crictl
	I1010 19:04:50.265987  123562 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:04:50.266017  123562 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:04:50.266033  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:04:50.266052  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:04:50.266055  123562 ssh_runner.go:195] Run: which crictl
	I1010 19:04:50.266121  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:04:50.283384  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:04:50.359512  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:04:50.359519  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:04:50.373079  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:04:50.373192  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:04:50.373246  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:04:50.373488  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:04:50.392332  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:04:50.504939  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:04:50.504989  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:04:50.583543  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:04:50.583583  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:04:50.583600  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:04:50.583659  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:04:50.583680  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:04:50.625514  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:04:50.625542  123562 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:04:50.725834  123562 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:04:50.729587  123562 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:04:50.729663  123562 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:04:50.753337  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:04:50.753362  123562 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:04:50.753413  123562 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:04:50.791441  123562 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:04:50.802760  123562 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:04:50.822325  123562 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:04:50.971516  123562 cache_images.go:92] duration metric: took 1.093412821s to LoadCachedImages
	W1010 19:04:50.971625  123562 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1010 19:04:50.971648  123562 kubeadm.go:934] updating node { 192.168.50.54 8443 v1.20.0 crio true true} ...
	I1010 19:04:50.971780  123562 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-857939 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-857939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:04:50.971869  123562 ssh_runner.go:195] Run: crio config
	I1010 19:04:51.026466  123562 cni.go:84] Creating CNI manager for ""
	I1010 19:04:51.026495  123562 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:04:51.026507  123562 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:04:51.026526  123562 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.54 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-857939 NodeName:kubernetes-upgrade-857939 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:04:51.026663  123562 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-857939"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.54
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.54"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
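The kubeadm "wait-control-plane" failure recorded further down in this log comes from the kubelet never answering its local health endpoint, http://localhost:10248/healthz (every probe ends in "connection refused"). A minimal Go sketch of that same probe, assuming it is run directly on the affected node while kubeadm is waiting, might look like this (the 40s deadline mirrors kubeadm's initial kubelet-check timeout shown below):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe the kubelet health endpoint that kubeadm's kubelet-check uses.
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(40 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// e.g. "connection refused", as seen repeatedly in this log
			fmt.Println("kubelet not responding:", err)
		} else {
			fmt.Println("kubelet healthz:", resp.Status)
			resp.Body.Close()
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}

If the probe never succeeds, the log's own suggestions ('systemctl status kubelet', 'journalctl -xeu kubelet', 'crictl ps -a') are the natural next step for finding out why the kubelet did not start.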
	I1010 19:04:51.026728  123562 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:04:51.037905  123562 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:04:51.037979  123562 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:04:51.048658  123562 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1010 19:04:51.067619  123562 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:04:51.093843  123562 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:04:51.115749  123562 ssh_runner.go:195] Run: grep 192.168.50.54	control-plane.minikube.internal$ /etc/hosts
	I1010 19:04:51.120023  123562 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:04:51.133735  123562 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:04:51.279785  123562 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:04:51.300607  123562 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939 for IP: 192.168.50.54
	I1010 19:04:51.300633  123562 certs.go:194] generating shared ca certs ...
	I1010 19:04:51.300656  123562 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:04:51.300843  123562 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:04:51.300921  123562 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:04:51.300936  123562 certs.go:256] generating profile certs ...
	I1010 19:04:51.301039  123562 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/client.key
	I1010 19:04:51.301060  123562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/client.crt with IP's: []
	I1010 19:04:51.701637  123562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/client.crt ...
	I1010 19:04:51.701677  123562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/client.crt: {Name:mk925d6c2c7818d2dce1212790aeb1f62e37c4bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:04:51.701863  123562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/client.key ...
	I1010 19:04:51.701880  123562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/client.key: {Name:mk27fc90a8bf1886ec3b89cefd618f3ecc3c41d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:04:51.702001  123562 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.key.de133bb1
	I1010 19:04:51.702022  123562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.crt.de133bb1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.54]
	I1010 19:04:51.791238  123562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.crt.de133bb1 ...
	I1010 19:04:51.791277  123562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.crt.de133bb1: {Name:mk2d78beeec480365293b956af5f474d867b2eb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:04:51.791457  123562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.key.de133bb1 ...
	I1010 19:04:51.791475  123562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.key.de133bb1: {Name:mk3d891fb4b9da44789bc11dd5814f25e00c6fd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:04:51.791573  123562 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.crt.de133bb1 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.crt
	I1010 19:04:51.791664  123562 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.key.de133bb1 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.key
	I1010 19:04:51.791742  123562 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/proxy-client.key
	I1010 19:04:51.791764  123562 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/proxy-client.crt with IP's: []
	I1010 19:04:52.007492  123562 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/proxy-client.crt ...
	I1010 19:04:52.007523  123562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/proxy-client.crt: {Name:mk37155cae622cf3015b780a387607da9d89e1c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:04:52.007688  123562 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/proxy-client.key ...
	I1010 19:04:52.007702  123562 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/proxy-client.key: {Name:mk1fa925c2f3729cdd462f2864556a491c7969f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:04:52.007869  123562 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:04:52.007906  123562 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:04:52.007916  123562 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:04:52.007943  123562 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:04:52.007990  123562 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:04:52.008019  123562 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:04:52.008055  123562 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:04:52.008751  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:04:52.043578  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:04:52.071641  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:04:52.100027  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:04:52.127622  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1010 19:04:52.155986  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:04:52.195008  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:04:52.237732  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kubernetes-upgrade-857939/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 19:04:52.269741  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:04:52.299340  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:04:52.326050  123562 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:04:52.352582  123562 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:04:52.370431  123562 ssh_runner.go:195] Run: openssl version
	I1010 19:04:52.377147  123562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:04:52.390310  123562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:04:52.395474  123562 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:04:52.395555  123562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:04:52.401873  123562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:04:52.414271  123562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:04:52.427099  123562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:04:52.432347  123562 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:04:52.432422  123562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:04:52.438669  123562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:04:52.452265  123562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:04:52.465553  123562 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:04:52.470930  123562 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:04:52.471001  123562 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:04:52.477660  123562 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:04:52.490039  123562 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:04:52.494722  123562 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 19:04:52.494789  123562 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-857939 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-857939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.54 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:04:52.494890  123562 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:04:52.494958  123562 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:04:52.542677  123562 cri.go:89] found id: ""
	I1010 19:04:52.542760  123562 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:04:52.554111  123562 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:04:52.565369  123562 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:04:52.576943  123562 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:04:52.576967  123562 kubeadm.go:157] found existing configuration files:
	
	I1010 19:04:52.577026  123562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:04:52.589552  123562 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:04:52.589606  123562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:04:52.601348  123562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:04:52.613258  123562 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:04:52.613332  123562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:04:52.623720  123562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:04:52.634065  123562 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:04:52.634121  123562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:04:52.645116  123562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:04:52.658294  123562 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:04:52.658348  123562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:04:52.672515  123562 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:04:52.966753  123562 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:06:51.062827  123562 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:06:51.062953  123562 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:06:51.064443  123562 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:06:51.064518  123562 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:06:51.064632  123562 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:06:51.064811  123562 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:06:51.064991  123562 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:06:51.065102  123562 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:06:51.067085  123562 out.go:235]   - Generating certificates and keys ...
	I1010 19:06:51.067184  123562 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:06:51.067297  123562 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:06:51.067403  123562 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 19:06:51.067486  123562 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1010 19:06:51.067569  123562 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1010 19:06:51.067640  123562 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1010 19:06:51.067714  123562 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1010 19:06:51.067916  123562 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-857939 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	I1010 19:06:51.067996  123562 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1010 19:06:51.068193  123562 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-857939 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	I1010 19:06:51.068289  123562 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 19:06:51.068422  123562 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 19:06:51.068517  123562 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1010 19:06:51.068598  123562 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:06:51.068673  123562 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:06:51.068754  123562 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:06:51.068865  123562 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:06:51.068950  123562 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:06:51.069130  123562 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:06:51.069265  123562 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:06:51.069331  123562 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:06:51.069425  123562 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:06:51.071223  123562 out.go:235]   - Booting up control plane ...
	I1010 19:06:51.071364  123562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:06:51.071479  123562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:06:51.071585  123562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:06:51.071716  123562 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:06:51.071994  123562 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:06:51.072070  123562 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:06:51.072129  123562 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:06:51.072316  123562 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:06:51.072436  123562 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:06:51.072686  123562 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:06:51.072792  123562 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:06:51.073063  123562 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:06:51.073152  123562 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:06:51.073403  123562 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:06:51.073510  123562 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:06:51.073788  123562 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:06:51.073800  123562 kubeadm.go:310] 
	I1010 19:06:51.073856  123562 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:06:51.073918  123562 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:06:51.073927  123562 kubeadm.go:310] 
	I1010 19:06:51.073979  123562 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:06:51.074023  123562 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:06:51.074166  123562 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:06:51.074177  123562 kubeadm.go:310] 
	I1010 19:06:51.074308  123562 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:06:51.074355  123562 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:06:51.074411  123562 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:06:51.074426  123562 kubeadm.go:310] 
	I1010 19:06:51.074572  123562 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:06:51.074680  123562 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:06:51.074690  123562 kubeadm.go:310] 
	I1010 19:06:51.074804  123562 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:06:51.074911  123562 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:06:51.075019  123562 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:06:51.075128  123562 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:06:51.075165  123562 kubeadm.go:310] 
	W1010 19:06:51.075278  123562 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-857939 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-857939 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-857939 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-857939 localhost] and IPs [192.168.50.54 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1010 19:06:51.075329  123562 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:06:52.222626  123562 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.147265954s)
	I1010 19:06:52.222713  123562 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:06:52.238391  123562 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:06:52.249036  123562 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:06:52.249063  123562 kubeadm.go:157] found existing configuration files:
	
	I1010 19:06:52.249123  123562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:06:52.259219  123562 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:06:52.259303  123562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:06:52.269559  123562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:06:52.279475  123562 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:06:52.279573  123562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:06:52.293285  123562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:06:52.302971  123562 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:06:52.303045  123562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:06:52.313540  123562 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:06:52.323787  123562 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:06:52.323863  123562 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:06:52.334809  123562 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:06:52.558351  123562 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:08:49.120077  123562 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:08:49.120192  123562 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:08:49.121692  123562 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:08:49.121757  123562 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:08:49.121854  123562 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:08:49.121983  123562 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:08:49.122095  123562 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:08:49.122221  123562 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:08:49.124337  123562 out.go:235]   - Generating certificates and keys ...
	I1010 19:08:49.124464  123562 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:08:49.124554  123562 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:08:49.124676  123562 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:08:49.124777  123562 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:08:49.124866  123562 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:08:49.124948  123562 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:08:49.125023  123562 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:08:49.125110  123562 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:08:49.125176  123562 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:08:49.125293  123562 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:08:49.125341  123562 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:08:49.125390  123562 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:08:49.125470  123562 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:08:49.125566  123562 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:08:49.125658  123562 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:08:49.125733  123562 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:08:49.125895  123562 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:08:49.125977  123562 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:08:49.126036  123562 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:08:49.126137  123562 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:08:49.129148  123562 out.go:235]   - Booting up control plane ...
	I1010 19:08:49.129266  123562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:08:49.129335  123562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:08:49.129411  123562 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:08:49.129511  123562 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:08:49.129665  123562 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:08:49.129712  123562 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:08:49.129770  123562 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:08:49.129928  123562 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:08:49.130009  123562 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:08:49.130221  123562 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:08:49.130330  123562 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:08:49.130585  123562 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:08:49.130690  123562 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:08:49.130918  123562 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:08:49.131006  123562 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:08:49.131218  123562 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:08:49.131229  123562 kubeadm.go:310] 
	I1010 19:08:49.131263  123562 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:08:49.131306  123562 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:08:49.131319  123562 kubeadm.go:310] 
	I1010 19:08:49.131352  123562 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:08:49.131463  123562 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:08:49.131608  123562 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:08:49.131620  123562 kubeadm.go:310] 
	I1010 19:08:49.131711  123562 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:08:49.131742  123562 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:08:49.131769  123562 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:08:49.131774  123562 kubeadm.go:310] 
	I1010 19:08:49.131859  123562 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:08:49.131946  123562 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:08:49.131956  123562 kubeadm.go:310] 
	I1010 19:08:49.132096  123562 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:08:49.132178  123562 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:08:49.132295  123562 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:08:49.132405  123562 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:08:49.132426  123562 kubeadm.go:310] 
	I1010 19:08:49.132492  123562 kubeadm.go:394] duration metric: took 3m56.637707776s to StartCluster
	I1010 19:08:49.132540  123562 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:08:49.132593  123562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:08:49.179137  123562 cri.go:89] found id: ""
	I1010 19:08:49.179169  123562 logs.go:282] 0 containers: []
	W1010 19:08:49.179181  123562 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:08:49.179189  123562 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:08:49.179255  123562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:08:49.218233  123562 cri.go:89] found id: ""
	I1010 19:08:49.218264  123562 logs.go:282] 0 containers: []
	W1010 19:08:49.218274  123562 logs.go:284] No container was found matching "etcd"
	I1010 19:08:49.218281  123562 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:08:49.218342  123562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:08:49.257726  123562 cri.go:89] found id: ""
	I1010 19:08:49.257756  123562 logs.go:282] 0 containers: []
	W1010 19:08:49.257771  123562 logs.go:284] No container was found matching "coredns"
	I1010 19:08:49.257777  123562 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:08:49.257832  123562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:08:49.299504  123562 cri.go:89] found id: ""
	I1010 19:08:49.299533  123562 logs.go:282] 0 containers: []
	W1010 19:08:49.299542  123562 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:08:49.299548  123562 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:08:49.299605  123562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:08:49.336570  123562 cri.go:89] found id: ""
	I1010 19:08:49.336605  123562 logs.go:282] 0 containers: []
	W1010 19:08:49.336616  123562 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:08:49.336624  123562 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:08:49.336688  123562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:08:49.374447  123562 cri.go:89] found id: ""
	I1010 19:08:49.374482  123562 logs.go:282] 0 containers: []
	W1010 19:08:49.374494  123562 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:08:49.374502  123562 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:08:49.374570  123562 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:08:49.418160  123562 cri.go:89] found id: ""
	I1010 19:08:49.418194  123562 logs.go:282] 0 containers: []
	W1010 19:08:49.418205  123562 logs.go:284] No container was found matching "kindnet"
	I1010 19:08:49.418218  123562 logs.go:123] Gathering logs for container status ...
	I1010 19:08:49.418240  123562 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:08:49.462493  123562 logs.go:123] Gathering logs for kubelet ...
	I1010 19:08:49.462533  123562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:08:49.514841  123562 logs.go:123] Gathering logs for dmesg ...
	I1010 19:08:49.514881  123562 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:08:49.530897  123562 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:08:49.530933  123562 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:08:49.686276  123562 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:08:49.686308  123562 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:08:49.686326  123562 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1010 19:08:49.823146  123562 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:08:49.823239  123562 out.go:270] * 
	* 
	W1010 19:08:49.823298  123562 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:08:49.823312  123562 out.go:270] * 
	* 
	W1010 19:08:49.824353  123562 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:08:49.827924  123562 out.go:201] 
	W1010 19:08:49.829696  123562 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:08:49.829776  123562 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:08:49.829805  123562 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1010 19:08:49.832105  123562 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-857939 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
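For reference, the kubeadm output above repeatedly points at the kubelet health check on localhost:10248 refusing connections. Below is a minimal shell sketch of the troubleshooting steps the log itself suggests, assuming shell access to the node (for example via 'minikube ssh -p kubernetes-upgrade-857939'); CONTAINERID is the placeholder from the kubeadm message, and the final retry simply re-runs the start command from this test with the '--extra-config=kubelet.cgroup-driver=systemd' hint that minikube prints for this failure mode:

	# inside the node: check whether the kubelet is running and why it may have exited
	systemctl status kubelet
	journalctl -xeu kubelet
	# list any control-plane containers CRI-O started, then inspect the failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# from the host: retry the same start with the suggested cgroup-driver setting
	out/minikube-linux-amd64 start -p kubernetes-upgrade-857939 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd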
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-857939
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-857939: (2.323958462s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-857939 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-857939 status --format={{.Host}}: exit status 7 (77.724461ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
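In other words, exit status 7 from 'status' is the expected result for a stopped profile: the stop succeeded and the host reports Stopped. A minimal sketch of the stop-then-verify pattern the test uses, with the commands exactly as invoked above:

	out/minikube-linux-amd64 stop -p kubernetes-upgrade-857939
	out/minikube-linux-amd64 -p kubernetes-upgrade-857939 status --format={{.Host}}
	# prints "Stopped" and exits with status 7, which the test treats as acceptable ("may be ok")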
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857939 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-857939 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.163317681s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-857939 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857939 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-857939 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (98.308624ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-857939] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19787
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-857939
	    minikube start -p kubernetes-upgrade-857939 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8579392 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-857939 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
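Exit status 106 is the expected negative result for this step: minikube refuses to downgrade an existing v1.31.1 cluster in place, and the test proceeds by restarting at v1.31.1 below. For an operator who actually needed the older version, a minimal sketch of option (1) from the suggestion above (the commands mirror that suggestion plus the driver flags the test itself passes; out/minikube-linux-amd64 is simply the harness-built binary):

	# recreate the profile at the older Kubernetes version instead of downgrading in place
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-857939
	out/minikube-linux-amd64 start -p kubernetes-upgrade-857939 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio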
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-857939 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1010 19:09:59.530112   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-857939 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (26.924292814s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-10 19:10:01.547440023 +0000 UTC m=+4349.232375226
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-857939 -n kubernetes-upgrade-857939
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-857939 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-857939 logs -n 25: (1.723305029s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-873515 sudo find            | cilium-873515             | jenkins | v1.34.0 | 10 Oct 24 19:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-873515 sudo crio            | cilium-873515             | jenkins | v1.34.0 | 10 Oct 24 19:07 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-873515                      | cilium-873515             | jenkins | v1.34.0 | 10 Oct 24 19:07 UTC | 10 Oct 24 19:07 UTC |
	| start   | -p running-upgrade-001575             | minikube                  | jenkins | v1.26.0 | 10 Oct 24 19:07 UTC | 10 Oct 24 19:08 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-693624 sudo           | NoKubernetes-693624       | jenkins | v1.34.0 | 10 Oct 24 19:07 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-693624                | NoKubernetes-693624       | jenkins | v1.34.0 | 10 Oct 24 19:07 UTC | 10 Oct 24 19:07 UTC |
	| start   | -p cert-expiration-292195             | cert-expiration-292195    | jenkins | v1.34.0 | 10 Oct 24 19:07 UTC | 10 Oct 24 19:08 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| pause   | -p pause-215992                       | pause-215992              | jenkins | v1.34.0 | 10 Oct 24 19:08 UTC | 10 Oct 24 19:08 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-215992                       | pause-215992              | jenkins | v1.34.0 | 10 Oct 24 19:08 UTC | 10 Oct 24 19:08 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-215992                       | pause-215992              | jenkins | v1.34.0 | 10 Oct 24 19:08 UTC | 10 Oct 24 19:08 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-215992                       | pause-215992              | jenkins | v1.34.0 | 10 Oct 24 19:08 UTC | 10 Oct 24 19:08 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-215992                       | pause-215992              | jenkins | v1.34.0 | 10 Oct 24 19:08 UTC | 10 Oct 24 19:08 UTC |
	| start   | -p cert-options-584539                | cert-options-584539       | jenkins | v1.34.0 | 10 Oct 24 19:08 UTC | 10 Oct 24 19:09 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-001575             | running-upgrade-001575    | jenkins | v1.34.0 | 10 Oct 24 19:08 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-857939          | kubernetes-upgrade-857939 | jenkins | v1.34.0 | 10 Oct 24 19:08 UTC | 10 Oct 24 19:08 UTC |
	| start   | -p kubernetes-upgrade-857939          | kubernetes-upgrade-857939 | jenkins | v1.34.0 | 10 Oct 24 19:08 UTC | 10 Oct 24 19:09 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-584539 ssh               | cert-options-584539       | jenkins | v1.34.0 | 10 Oct 24 19:09 UTC | 10 Oct 24 19:09 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-584539 -- sudo        | cert-options-584539       | jenkins | v1.34.0 | 10 Oct 24 19:09 UTC | 10 Oct 24 19:09 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-584539                | cert-options-584539       | jenkins | v1.34.0 | 10 Oct 24 19:09 UTC | 10 Oct 24 19:09 UTC |
	| start   | -p force-systemd-flag-160659          | force-systemd-flag-160659 | jenkins | v1.34.0 | 10 Oct 24 19:09 UTC | 10 Oct 24 19:10 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-857939          | kubernetes-upgrade-857939 | jenkins | v1.34.0 | 10 Oct 24 19:09 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-857939          | kubernetes-upgrade-857939 | jenkins | v1.34.0 | 10 Oct 24 19:09 UTC | 10 Oct 24 19:10 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-160659 ssh cat     | force-systemd-flag-160659 | jenkins | v1.34.0 | 10 Oct 24 19:10 UTC | 10 Oct 24 19:10 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-160659          | force-systemd-flag-160659 | jenkins | v1.34.0 | 10 Oct 24 19:10 UTC | 10 Oct 24 19:10 UTC |
	| start   | -p auto-873515 --memory=3072          | auto-873515               | jenkins | v1.34.0 | 10 Oct 24 19:10 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 19:10:02
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 19:10:02.269361  131486 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:10:02.270956  131486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:10:02.270969  131486 out.go:358] Setting ErrFile to fd 2...
	I1010 19:10:02.270974  131486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:10:02.271281  131486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:10:02.272247  131486 out.go:352] Setting JSON to false
	I1010 19:10:02.273501  131486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10348,"bootTime":1728577054,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:10:02.273621  131486 start.go:139] virtualization: kvm guest
	I1010 19:10:02.275865  131486 out.go:177] * [auto-873515] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:10:02.277961  131486 notify.go:220] Checking for updates...
	I1010 19:10:02.277972  131486 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:10:02.279697  131486 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:10:02.281583  131486 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:10:02.283598  131486 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:10:02.285174  131486 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:10:02.287107  131486 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.723599575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728587402723564588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22947cdb-e067-48ff-8248-1e14f2ee48d2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.724154466Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf606e7f-fe88-44b5-9472-abb54bc208e0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.724242103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf606e7f-fe88-44b5-9472-abb54bc208e0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.724782220Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8210af010202d643c83f85253b283b8a783771565b7307b4b3026b95eb7a2a04,PodSandboxId:90557fbb2c502099d8ab4f5d1cd6cd1a3f4d8eb767665f391f9adf7c81813f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728587399130121022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hght6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b4f66d-10f1-47e2-801f-8901fbf9f9db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db67fd9ff089fc9bc518ce1c5e4ce9f816b81a39427f6c274f0c987724cd260,PodSandboxId:d2614260400ba6ed1dfeaca13b62a2a97648946c66d5be3e409deb6f43e61ee6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728587398596941118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ncb5,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a2a7ab42-c850-4705-8379-943b944cf694,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33e837852233b6a8caa8b2f8e6ff9b2d169b4edb5f3a2bbc742f279db5a3c95,PodSandboxId:723b985031d3822e544758d54f4369f42ebe1e91f12dd80799fa099fcba493a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1728587398543174653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53084bc0-c109-4800-b766-eedd50b55889,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0e01e84564883f3661572bcea123c7be517221dcb2cc40672e903002fc85de,PodSandboxId:9d11e688c20d54c3990e25b2d3718239b0fc1d807a7eeb1348da789f4396c933,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
8587398519122978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f5fff39-fb8a-44e7-86f3-afa3b2562474,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b42bd07a29a5cd1ff6d8440d53aa79d47720e838320559fe36ffa7e0c5f1caf6,PodSandboxId:5876e35648aabddd4d60d8cb835bf1748ddf53aa4606b2f8cb3dfe3482888a72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728587393778868571,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad94f4ce7a8317f460ee5cf3ab29b541,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c5af008db8302948e0648c99017d936e2ce1714223c6c8356df0222489dc5,PodSandboxId:1ad306afa3a040306a1fc0a1ed55b3ffa28519cd001819cba26a4bad0f3b0e29,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728587393747144767,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05a5c9e0422630db94ef057f91054c89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01286a1f9d5949aff7907faa92a3084667a3c0d671e3085c42c0c5f2b0dc89df,PodSandboxId:3d037b3a688cf35bac7febc17ec39b557fa78bf14351e1c2cc0e6849f1264dce,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728587393709236061,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00f234f9958fff159d89df29b2a7baf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3493cc24e42085d393796ef920f703d895b4b35c416f159d63ed957ab9eb64,PodSandboxId:7fea837813258ffe3f900b9da8a53ebc61806493150971dfcb3402ba220691a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728587393712996658,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5416c23d232067adb2ee8035b4c33a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf9da9b806ce77c791b9dc6daf8a0b41e723b641389401735a5ca2fd97780110,PodSandboxId:b61a6f7c1ab03f35130166ee0c26a3540aaf2c11daa946077f9e8771c6f86fd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728587388641329553,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ncb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7ab42-c850-4705-8379-943b944cf694,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc9d1d2699b2b8b5231a40f42553f2dc48a656c3d532a84ea8330896ac803dd,PodSandboxId:c5555c6c16efc0f3c71ce7b73404e17600315e90100cc947903415d36448c780,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728587387832151517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05a5c9e0422630db94ef057f91054c89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70895ca59e65f6f4ab8e62d45ef620ae7d7bfe40adc15cdb084281df78601a2a,PodSandboxId:05b9bc78e83baa9aa0cc6d5fa1e296c8f566f3223b77b83649e94f496c6e590a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728587388027463865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f5fff39-fb8a-44e7-86f3-afa3b2562474,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6519bb068341c5ac74fe4bb0f26969b4fe0200c0e0c3a6e73d78cecb4e41f,PodSandboxId:a6d4a1311d85cb0877aefb1dcd59c7ae3d933fd5315ef1a1f49b5f46a1177f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728587387744786371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5416c23d232067adb2ee8035b4c33a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7138bebe1e305282306667575715adf955d5647761bc465438e39009f0bfe89,PodSandboxId:92ed9379171b34f2bb81c1c00874b2a0d672ba3a2f84ac4ff30820afd0509f22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728587387779594891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00f234f9958fff159d89df29b2a7baf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b966447cba2b071dd496757b4225b202aac5208a9a89429ae6fed1fc3f157f87,PodSandboxId:1050bcf2d41c99728abcc341d6c7b627a07b4a3c4dd97a84c3bd1c365c817b9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728587387801491408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad94f4ce7a8317f460ee5cf3ab29b541,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243356f4ddb2930ceaaf5ae3fafd611651861ba58c58df5d82adb33c3b9ff1fb,PodSandboxId:091680f2d4f28c0d353964892852a625f4ba3e7c1426c4a40bac3c70a90c34fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728587387690086398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53084bc0-c109-4800-b766-eedd50b55889,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb6fcef709f27f79fbc67248371619ddbde0d08929bdc27c7127078fe1103b0,PodSandboxId:7974ed12e21533536286800ad4c44f6ec5ceef7b51c03be2b4d967e3685fd28b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728587377036248354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hght6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b4f66d-10f1-47e2-801f-8901fbf9f9db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf606e7f-fe88-44b5-9472-abb54bc208e0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.768803383Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50b0f2e0-a59a-4862-bcac-5f7e0092f28e name=/runtime.v1.RuntimeService/Version
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.768884582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50b0f2e0-a59a-4862-bcac-5f7e0092f28e name=/runtime.v1.RuntimeService/Version
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.770310304Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c050abd-533c-42bb-9d86-d5fbffaf187b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.770744686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728587402770656012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c050abd-533c-42bb-9d86-d5fbffaf187b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.771321426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31420fee-5a82-412d-a024-d200b6d759d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.771393926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31420fee-5a82-412d-a024-d200b6d759d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.771840891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8210af010202d643c83f85253b283b8a783771565b7307b4b3026b95eb7a2a04,PodSandboxId:90557fbb2c502099d8ab4f5d1cd6cd1a3f4d8eb767665f391f9adf7c81813f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728587399130121022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hght6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b4f66d-10f1-47e2-801f-8901fbf9f9db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db67fd9ff089fc9bc518ce1c5e4ce9f816b81a39427f6c274f0c987724cd260,PodSandboxId:d2614260400ba6ed1dfeaca13b62a2a97648946c66d5be3e409deb6f43e61ee6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728587398596941118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ncb5,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a2a7ab42-c850-4705-8379-943b944cf694,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33e837852233b6a8caa8b2f8e6ff9b2d169b4edb5f3a2bbc742f279db5a3c95,PodSandboxId:723b985031d3822e544758d54f4369f42ebe1e91f12dd80799fa099fcba493a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1728587398543174653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53084bc0-c109-4800-b766-eedd50b55889,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0e01e84564883f3661572bcea123c7be517221dcb2cc40672e903002fc85de,PodSandboxId:9d11e688c20d54c3990e25b2d3718239b0fc1d807a7eeb1348da789f4396c933,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
8587398519122978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f5fff39-fb8a-44e7-86f3-afa3b2562474,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b42bd07a29a5cd1ff6d8440d53aa79d47720e838320559fe36ffa7e0c5f1caf6,PodSandboxId:5876e35648aabddd4d60d8cb835bf1748ddf53aa4606b2f8cb3dfe3482888a72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728587393778868571,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad94f4ce7a8317f460ee5cf3ab29b541,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c5af008db8302948e0648c99017d936e2ce1714223c6c8356df0222489dc5,PodSandboxId:1ad306afa3a040306a1fc0a1ed55b3ffa28519cd001819cba26a4bad0f3b0e29,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728587393747144767,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05a5c9e0422630db94ef057f91054c89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01286a1f9d5949aff7907faa92a3084667a3c0d671e3085c42c0c5f2b0dc89df,PodSandboxId:3d037b3a688cf35bac7febc17ec39b557fa78bf14351e1c2cc0e6849f1264dce,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728587393709236061,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00f234f9958fff159d89df29b2a7baf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3493cc24e42085d393796ef920f703d895b4b35c416f159d63ed957ab9eb64,PodSandboxId:7fea837813258ffe3f900b9da8a53ebc61806493150971dfcb3402ba220691a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728587393712996658,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5416c23d232067adb2ee8035b4c33a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf9da9b806ce77c791b9dc6daf8a0b41e723b641389401735a5ca2fd97780110,PodSandboxId:b61a6f7c1ab03f35130166ee0c26a3540aaf2c11daa946077f9e8771c6f86fd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728587388641329553,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ncb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7ab42-c850-4705-8379-943b944cf694,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc9d1d2699b2b8b5231a40f42553f2dc48a656c3d532a84ea8330896ac803dd,PodSandboxId:c5555c6c16efc0f3c71ce7b73404e17600315e90100cc947903415d36448c780,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728587387832151517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05a5c9e0422630db94ef057f91054c89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70895ca59e65f6f4ab8e62d45ef620ae7d7bfe40adc15cdb084281df78601a2a,PodSandboxId:05b9bc78e83baa9aa0cc6d5fa1e296c8f566f3223b77b83649e94f496c6e590a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728587388027463865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f5fff39-fb8a-44e7-86f3-afa3b2562474,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6519bb068341c5ac74fe4bb0f26969b4fe0200c0e0c3a6e73d78cecb4e41f,PodSandboxId:a6d4a1311d85cb0877aefb1dcd59c7ae3d933fd5315ef1a1f49b5f46a1177f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728587387744786371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5416c23d232067adb2ee8035b4c33a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7138bebe1e305282306667575715adf955d5647761bc465438e39009f0bfe89,PodSandboxId:92ed9379171b34f2bb81c1c00874b2a0d672ba3a2f84ac4ff30820afd0509f22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728587387779594891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00f234f9958fff159d89df29b2a7baf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b966447cba2b071dd496757b4225b202aac5208a9a89429ae6fed1fc3f157f87,PodSandboxId:1050bcf2d41c99728abcc341d6c7b627a07b4a3c4dd97a84c3bd1c365c817b9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728587387801491408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad94f4ce7a8317f460ee5cf3ab29b541,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243356f4ddb2930ceaaf5ae3fafd611651861ba58c58df5d82adb33c3b9ff1fb,PodSandboxId:091680f2d4f28c0d353964892852a625f4ba3e7c1426c4a40bac3c70a90c34fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728587387690086398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53084bc0-c109-4800-b766-eedd50b55889,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb6fcef709f27f79fbc67248371619ddbde0d08929bdc27c7127078fe1103b0,PodSandboxId:7974ed12e21533536286800ad4c44f6ec5ceef7b51c03be2b4d967e3685fd28b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728587377036248354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hght6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b4f66d-10f1-47e2-801f-8901fbf9f9db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31420fee-5a82-412d-a024-d200b6d759d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.819157363Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a21b6667-e280-464a-b476-4f56a743a902 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.820504222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a21b6667-e280-464a-b476-4f56a743a902 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.823528305Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d407910b-19bb-4fff-83dd-096cfa551082 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.824069342Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728587402824045413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d407910b-19bb-4fff-83dd-096cfa551082 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.824902854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=783f49a8-b897-4515-8558-5e0b798ae8ad name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.824971099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=783f49a8-b897-4515-8558-5e0b798ae8ad name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.825268655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8210af010202d643c83f85253b283b8a783771565b7307b4b3026b95eb7a2a04,PodSandboxId:90557fbb2c502099d8ab4f5d1cd6cd1a3f4d8eb767665f391f9adf7c81813f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728587399130121022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hght6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b4f66d-10f1-47e2-801f-8901fbf9f9db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db67fd9ff089fc9bc518ce1c5e4ce9f816b81a39427f6c274f0c987724cd260,PodSandboxId:d2614260400ba6ed1dfeaca13b62a2a97648946c66d5be3e409deb6f43e61ee6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728587398596941118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ncb5,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a2a7ab42-c850-4705-8379-943b944cf694,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33e837852233b6a8caa8b2f8e6ff9b2d169b4edb5f3a2bbc742f279db5a3c95,PodSandboxId:723b985031d3822e544758d54f4369f42ebe1e91f12dd80799fa099fcba493a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1728587398543174653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53084bc0-c109-4800-b766-eedd50b55889,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0e01e84564883f3661572bcea123c7be517221dcb2cc40672e903002fc85de,PodSandboxId:9d11e688c20d54c3990e25b2d3718239b0fc1d807a7eeb1348da789f4396c933,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
8587398519122978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f5fff39-fb8a-44e7-86f3-afa3b2562474,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b42bd07a29a5cd1ff6d8440d53aa79d47720e838320559fe36ffa7e0c5f1caf6,PodSandboxId:5876e35648aabddd4d60d8cb835bf1748ddf53aa4606b2f8cb3dfe3482888a72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728587393778868571,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad94f4ce7a8317f460ee5cf3ab29b541,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c5af008db8302948e0648c99017d936e2ce1714223c6c8356df0222489dc5,PodSandboxId:1ad306afa3a040306a1fc0a1ed55b3ffa28519cd001819cba26a4bad0f3b0e29,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728587393747144767,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05a5c9e0422630db94ef057f91054c89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01286a1f9d5949aff7907faa92a3084667a3c0d671e3085c42c0c5f2b0dc89df,PodSandboxId:3d037b3a688cf35bac7febc17ec39b557fa78bf14351e1c2cc0e6849f1264dce,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728587393709236061,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00f234f9958fff159d89df29b2a7baf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3493cc24e42085d393796ef920f703d895b4b35c416f159d63ed957ab9eb64,PodSandboxId:7fea837813258ffe3f900b9da8a53ebc61806493150971dfcb3402ba220691a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728587393712996658,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5416c23d232067adb2ee8035b4c33a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf9da9b806ce77c791b9dc6daf8a0b41e723b641389401735a5ca2fd97780110,PodSandboxId:b61a6f7c1ab03f35130166ee0c26a3540aaf2c11daa946077f9e8771c6f86fd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728587388641329553,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ncb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7ab42-c850-4705-8379-943b944cf694,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc9d1d2699b2b8b5231a40f42553f2dc48a656c3d532a84ea8330896ac803dd,PodSandboxId:c5555c6c16efc0f3c71ce7b73404e17600315e90100cc947903415d36448c780,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728587387832151517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05a5c9e0422630db94ef057f91054c89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70895ca59e65f6f4ab8e62d45ef620ae7d7bfe40adc15cdb084281df78601a2a,PodSandboxId:05b9bc78e83baa9aa0cc6d5fa1e296c8f566f3223b77b83649e94f496c6e590a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728587388027463865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f5fff39-fb8a-44e7-86f3-afa3b2562474,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6519bb068341c5ac74fe4bb0f26969b4fe0200c0e0c3a6e73d78cecb4e41f,PodSandboxId:a6d4a1311d85cb0877aefb1dcd59c7ae3d933fd5315ef1a1f49b5f46a1177f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728587387744786371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5416c23d232067adb2ee8035b4c33a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7138bebe1e305282306667575715adf955d5647761bc465438e39009f0bfe89,PodSandboxId:92ed9379171b34f2bb81c1c00874b2a0d672ba3a2f84ac4ff30820afd0509f22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728587387779594891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00f234f9958fff159d89df29b2a7baf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b966447cba2b071dd496757b4225b202aac5208a9a89429ae6fed1fc3f157f87,PodSandboxId:1050bcf2d41c99728abcc341d6c7b627a07b4a3c4dd97a84c3bd1c365c817b9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728587387801491408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad94f4ce7a8317f460ee5cf3ab29b541,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243356f4ddb2930ceaaf5ae3fafd611651861ba58c58df5d82adb33c3b9ff1fb,PodSandboxId:091680f2d4f28c0d353964892852a625f4ba3e7c1426c4a40bac3c70a90c34fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728587387690086398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53084bc0-c109-4800-b766-eedd50b55889,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb6fcef709f27f79fbc67248371619ddbde0d08929bdc27c7127078fe1103b0,PodSandboxId:7974ed12e21533536286800ad4c44f6ec5ceef7b51c03be2b4d967e3685fd28b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728587377036248354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hght6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b4f66d-10f1-47e2-801f-8901fbf9f9db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=783f49a8-b897-4515-8558-5e0b798ae8ad name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.863350226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=828e3d3b-6d51-43c7-9e66-04a6cecbdda9 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.863444457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=828e3d3b-6d51-43c7-9e66-04a6cecbdda9 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.865522189Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91802f78-745e-4e47-8856-186dcdb5f802 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.866091697Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728587402866061134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91802f78-745e-4e47-8856-186dcdb5f802 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.866892576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95188f8a-39a7-4135-b474-dfb9fda5a027 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.866968067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95188f8a-39a7-4135-b474-dfb9fda5a027 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:10:02 kubernetes-upgrade-857939 crio[3055]: time="2024-10-10 19:10:02.867272968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8210af010202d643c83f85253b283b8a783771565b7307b4b3026b95eb7a2a04,PodSandboxId:90557fbb2c502099d8ab4f5d1cd6cd1a3f4d8eb767665f391f9adf7c81813f2f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728587399130121022,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hght6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b4f66d-10f1-47e2-801f-8901fbf9f9db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db67fd9ff089fc9bc518ce1c5e4ce9f816b81a39427f6c274f0c987724cd260,PodSandboxId:d2614260400ba6ed1dfeaca13b62a2a97648946c66d5be3e409deb6f43e61ee6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728587398596941118,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ncb5,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a2a7ab42-c850-4705-8379-943b944cf694,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b33e837852233b6a8caa8b2f8e6ff9b2d169b4edb5f3a2bbc742f279db5a3c95,PodSandboxId:723b985031d3822e544758d54f4369f42ebe1e91f12dd80799fa099fcba493a6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAIN
ER_RUNNING,CreatedAt:1728587398543174653,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53084bc0-c109-4800-b766-eedd50b55889,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e0e01e84564883f3661572bcea123c7be517221dcb2cc40672e903002fc85de,PodSandboxId:9d11e688c20d54c3990e25b2d3718239b0fc1d807a7eeb1348da789f4396c933,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
8587398519122978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f5fff39-fb8a-44e7-86f3-afa3b2562474,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b42bd07a29a5cd1ff6d8440d53aa79d47720e838320559fe36ffa7e0c5f1caf6,PodSandboxId:5876e35648aabddd4d60d8cb835bf1748ddf53aa4606b2f8cb3dfe3482888a72,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728587393778868571,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad94f4ce7a8317f460ee5cf3ab29b541,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:320c5af008db8302948e0648c99017d936e2ce1714223c6c8356df0222489dc5,PodSandboxId:1ad306afa3a040306a1fc0a1ed55b3ffa28519cd001819cba26a4bad0f3b0e29,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728587393747144767,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05a5c9e0422630db94ef057f91054c89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01286a1f9d5949aff7907faa92a3084667a3c0d671e3085c42c0c5f2b0dc89df,PodSandboxId:3d037b3a688cf35bac7febc17ec39b557fa78bf14351e1c2cc0e6849f1264dce,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728587393709236061,Labels:map[string]st
ring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00f234f9958fff159d89df29b2a7baf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3493cc24e42085d393796ef920f703d895b4b35c416f159d63ed957ab9eb64,PodSandboxId:7fea837813258ffe3f900b9da8a53ebc61806493150971dfcb3402ba220691a0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728587393712996658,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5416c23d232067adb2ee8035b4c33a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf9da9b806ce77c791b9dc6daf8a0b41e723b641389401735a5ca2fd97780110,PodSandboxId:b61a6f7c1ab03f35130166ee0c26a3540aaf2c11daa946077f9e8771c6f86fd7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728587388641329553,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6ncb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2a7ab42-c850-4705-8379-943b944cf694,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc9d1d2699b2b8b5231a40f42553f2dc48a656c3d532a84ea8330896ac803dd,PodSandboxId:c5555c6c16efc0f3c71ce7b73404e17600315e90100cc947903415d36448c780,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728587387832151517,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05a5c9e0422630db94ef057f91054c89,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70895ca59e65f6f4ab8e62d45ef620ae7d7bfe40adc15cdb084281df78601a2a,PodSandboxId:05b9bc78e83baa9aa0cc6d5fa1e296c8f566f3223b77b83649e94f496c6e590a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728587388027463865,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f5fff39-fb8a-44e7-86f3-afa3b2562474,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd6519bb068341c5ac74fe4bb0f26969b4fe0200c0e0c3a6e73d78cecb4e41f,PodSandboxId:a6d4a1311d85cb0877aefb1dcd59c7ae3d933fd5315ef1a1f49b5f46a1177f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728587387744786371,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5416c23d232067adb2ee8035b4c33a7d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7138bebe1e305282306667575715adf955d5647761bc465438e39009f0bfe89,PodSandboxId:92ed9379171b34f2bb81c1c00874b2a0d672ba3a2f84ac4ff30820afd0509f22,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728587387779594891,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f00f234f9958fff159d89df29b2a7baf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b966447cba2b071dd496757b4225b202aac5208a9a89429ae6fed1fc3f157f87,PodSandboxId:1050bcf2d41c99728abcc341d6c7b627a07b4a3c4dd97a84c3bd1c365c817b9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHa
ndler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728587387801491408,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-857939,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad94f4ce7a8317f460ee5cf3ab29b541,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243356f4ddb2930ceaaf5ae3fafd611651861ba58c58df5d82adb33c3b9ff1fb,PodSandboxId:091680f2d4f28c0d353964892852a625f4ba3e7c1426c4a40bac3c70a90c34fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728587387690086398,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-x8vxp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53084bc0-c109-4800-b766-eedd50b55889,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb6fcef709f27f79fbc67248371619ddbde0d08929bdc27c7127078fe1103b0,PodSandboxId:7974ed12e21533536286800ad4c44f6ec5ceef7b51c03be2b4d967e3685fd28b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d
3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728587377036248354,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-hght6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88b4f66d-10f1-47e2-801f-8901fbf9f9db,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95188f8a-39a7-4135-b474-dfb9fda5a027 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8210af010202d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   1                   90557fbb2c502       coredns-7c65d6cfc9-hght6
	3db67fd9ff089       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   4 seconds ago       Running             coredns                   2                   d2614260400ba       coredns-7c65d6cfc9-6ncb5
	b33e837852233       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   4 seconds ago       Running             kube-proxy                2                   723b985031d38       kube-proxy-x8vxp
	0e0e01e845648       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       2                   9d11e688c20d5       storage-provisioner
	b42bd07a29a5c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 seconds ago       Running             kube-scheduler            2                   5876e35648aab       kube-scheduler-kubernetes-upgrade-857939
	320c5af008db8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 seconds ago       Running             kube-apiserver            2                   1ad306afa3a04       kube-apiserver-kubernetes-upgrade-857939
	6c3493cc24e42       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 seconds ago       Running             kube-controller-manager   2                   7fea837813258       kube-controller-manager-kubernetes-upgrade-857939
	01286a1f9d594       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 seconds ago       Running             etcd                      2                   3d037b3a688cf       etcd-kubernetes-upgrade-857939
	bf9da9b806ce7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 seconds ago      Exited              coredns                   1                   b61a6f7c1ab03       coredns-7c65d6cfc9-6ncb5
	70895ca59e65f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       1                   05b9bc78e83ba       storage-provisioner
	0dc9d1d2699b2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   15 seconds ago      Exited              kube-apiserver            1                   c5555c6c16efc       kube-apiserver-kubernetes-upgrade-857939
	b966447cba2b0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   15 seconds ago      Exited              kube-scheduler            1                   1050bcf2d41c9       kube-scheduler-kubernetes-upgrade-857939
	c7138bebe1e30       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 seconds ago      Exited              etcd                      1                   92ed9379171b3       etcd-kubernetes-upgrade-857939
	cbd6519bb0683       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   15 seconds ago      Exited              kube-controller-manager   1                   a6d4a1311d85c       kube-controller-manager-kubernetes-upgrade-857939
	243356f4ddb29       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   15 seconds ago      Exited              kube-proxy                1                   091680f2d4f28       kube-proxy-x8vxp
	ffb6fcef709f2       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   25 seconds ago      Exited              coredns                   0                   7974ed12e2153       coredns-7c65d6cfc9-hght6
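	
The crio entries earlier in this log are repeated ListContainers polls, and the table above is the human-readable view of the same data. As a debugging aid only, here is a minimal Go sketch that issues the same /runtime.v1.RuntimeService/ListContainers RPC against CRI-O's gRPC socket from inside the VM. The socket path is an assumption (CRI-O's default is /var/run/crio/crio.sock), and the 13-character ID truncation simply mimics the table above.

// Sketch: list containers via the CRI RuntimeService, mirroring the crio log entries above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; adjust if the VM uses a different CRI-O socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial crio socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns the full container list, matching the
	// "No filters were applied, returning full container list" entries in the crio log.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		// %.13s truncates the 64-char container ID the same way the status table does.
		fmt.Printf("%-13.13s %-25s attempt=%d state=%v\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}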
	
	
	==> coredns [3db67fd9ff089fc9bc518ce1c5e4ce9f816b81a39427f6c274f0c987724cd260] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [8210af010202d643c83f85253b283b8a783771565b7307b4b3026b95eb7a2a04] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [bf9da9b806ce77c791b9dc6daf8a0b41e723b641389401735a5ca2fd97780110] <==
	
	
	==> coredns [ffb6fcef709f27f79fbc67248371619ddbde0d08929bdc27c7127078fe1103b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-857939
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-857939
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 19:09:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-857939
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 19:09:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 19:09:57 +0000   Thu, 10 Oct 2024 19:09:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 19:09:57 +0000   Thu, 10 Oct 2024 19:09:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 19:09:57 +0000   Thu, 10 Oct 2024 19:09:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 19:09:57 +0000   Thu, 10 Oct 2024 19:09:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.54
	  Hostname:    kubernetes-upgrade-857939
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7a097035051f4a6683ea7e477674ff16
	  System UUID:                7a097035-051f-4a66-83ea-7e477674ff16
	  Boot ID:                    5caa58ed-0ffd-420f-8143-82b76b74e42b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6ncb5                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29s
	  kube-system                 coredns-7c65d6cfc9-hght6                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29s
	  kube-system                 etcd-kubernetes-upgrade-857939                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26s
	  kube-system                 kube-apiserver-kubernetes-upgrade-857939             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-857939    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-x8vxp                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-kubernetes-upgrade-857939             100m (5%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  40s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node kubernetes-upgrade-857939 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node kubernetes-upgrade-857939 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x7 over 40s)  kubelet          Node kubernetes-upgrade-857939 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           29s                node-controller  Node kubernetes-upgrade-857939 event: Registered Node kubernetes-upgrade-857939 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-857939 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 10s)  kubelet          Node kubernetes-upgrade-857939 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 10s)  kubelet          Node kubernetes-upgrade-857939 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node kubernetes-upgrade-857939 event: Registered Node kubernetes-upgrade-857939 in Controller
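	
The Conditions and Events above are read from the Node object in the API server. If it helps to reproduce that view programmatically, the client-go sketch below prints the same condition rows; the kubeconfig location and reachability of the apiserver from the host are assumptions.

// Sketch: print the node conditions summarized under "Conditions:" above.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumed kubeconfig path; minikube writes the profile's context here by default.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(),
		"kubernetes-upgrade-857939", metav1.GetOptions{})
	if err != nil {
		log.Fatalf("get node: %v", err)
	}
	// Same MemoryPressure/DiskPressure/PIDPressure/Ready rows as in the describe output.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}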
	
	
	==> dmesg <==
	[  +1.726855] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.797620] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.064076] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062222] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.187017] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.123469] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.348501] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +4.854343] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +0.066842] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.138869] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +9.972078] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.084384] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.418584] kauditd_printk_skb: 109 callbacks suppressed
	[  +7.695698] systemd-fstab-generator[2184]: Ignoring "noauto" option for root device
	[  +0.152188] systemd-fstab-generator[2196]: Ignoring "noauto" option for root device
	[  +0.354839] systemd-fstab-generator[2286]: Ignoring "noauto" option for root device
	[  +0.226775] systemd-fstab-generator[2370]: Ignoring "noauto" option for root device
	[  +1.147258] systemd-fstab-generator[2858]: Ignoring "noauto" option for root device
	[  +1.418200] systemd-fstab-generator[3193]: Ignoring "noauto" option for root device
	[  +2.868481] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.092808] kauditd_printk_skb: 284 callbacks suppressed
	[  +5.646796] kauditd_printk_skb: 39 callbacks suppressed
	[Oct10 19:10] systemd-fstab-generator[4308]: Ignoring "noauto" option for root device
	
	
	==> etcd [01286a1f9d5949aff7907faa92a3084667a3c0d671e3085c42c0c5f2b0dc89df] <==
	{"level":"info","ts":"2024-10-10T19:09:54.473333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 switched to configuration voters=(12729067988122991553)"}
	{"level":"info","ts":"2024-10-10T19:09:54.489063Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","added-peer-id":"b0a6bbe4c9ddfbc1","added-peer-peer-urls":["https://192.168.50.54:2380"]}
	{"level":"info","ts":"2024-10-10T19:09:54.491561Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-10-10T19:09:54.491986Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-10-10T19:09:54.491501Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-10T19:09:54.492790Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b0a6bbe4c9ddfbc1","initial-advertise-peer-urls":["https://192.168.50.54:2380"],"listen-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-10T19:09:54.492915Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-10T19:09:54.498886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:09:54.499086Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:09:56.140798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-10T19:09:56.140854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-10T19:09:56.140897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgPreVoteResp from b0a6bbe4c9ddfbc1 at term 2"}
	{"level":"info","ts":"2024-10-10T19:09:56.140910Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became candidate at term 3"}
	{"level":"info","ts":"2024-10-10T19:09:56.140915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 received MsgVoteResp from b0a6bbe4c9ddfbc1 at term 3"}
	{"level":"info","ts":"2024-10-10T19:09:56.140923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became leader at term 3"}
	{"level":"info","ts":"2024-10-10T19:09:56.140930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0a6bbe4c9ddfbc1 elected leader b0a6bbe4c9ddfbc1 at term 3"}
	{"level":"info","ts":"2024-10-10T19:09:56.150133Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:09:56.151109Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:09:56.152087Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.54:2379"}
	{"level":"info","ts":"2024-10-10T19:09:56.152364Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:09:56.153111Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:09:56.153818Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-10T19:09:56.150085Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b0a6bbe4c9ddfbc1","local-member-attributes":"{Name:kubernetes-upgrade-857939 ClientURLs:[https://192.168.50.54:2379]}","request-path":"/0/members/b0a6bbe4c9ddfbc1/attributes","cluster-id":"b7dc4198fc8444d0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-10T19:09:56.158844Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-10T19:09:56.158874Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [c7138bebe1e305282306667575715adf955d5647761bc465438e39009f0bfe89] <==
	{"level":"info","ts":"2024-10-10T19:09:48.504432Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-10-10T19:09:48.582382Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","commit-index":389}
	{"level":"info","ts":"2024-10-10T19:09:48.582641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 switched to configuration voters=()"}
	{"level":"info","ts":"2024-10-10T19:09:48.582812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 became follower at term 2"}
	{"level":"info","ts":"2024-10-10T19:09:48.582938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b0a6bbe4c9ddfbc1 [peers: [], term: 2, commit: 389, applied: 0, lastindex: 389, lastterm: 2]"}
	{"level":"warn","ts":"2024-10-10T19:09:48.584814Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-10-10T19:09:48.603948Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":380}
	{"level":"info","ts":"2024-10-10T19:09:48.606818Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-10-10T19:09:48.624595Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b0a6bbe4c9ddfbc1","timeout":"7s"}
	{"level":"info","ts":"2024-10-10T19:09:48.633777Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b0a6bbe4c9ddfbc1"}
	{"level":"info","ts":"2024-10-10T19:09:48.633915Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"b0a6bbe4c9ddfbc1","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-10T19:09:48.634467Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-10T19:09:48.639230Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0a6bbe4c9ddfbc1 switched to configuration voters=(12729067988122991553)"}
	{"level":"info","ts":"2024-10-10T19:09:48.639361Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","added-peer-id":"b0a6bbe4c9ddfbc1","added-peer-peer-urls":["https://192.168.50.54:2380"]}
	{"level":"info","ts":"2024-10-10T19:09:48.639569Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b7dc4198fc8444d0","local-member-id":"b0a6bbe4c9ddfbc1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:09:48.639605Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:09:48.648445Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-10T19:09:48.648500Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-10T19:09:48.648508Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-10T19:09:48.649292Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:09:48.661284Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-10T19:09:48.661469Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b0a6bbe4c9ddfbc1","initial-advertise-peer-urls":["https://192.168.50.54:2380"],"listen-peer-urls":["https://192.168.50.54:2380"],"advertise-client-urls":["https://192.168.50.54:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.54:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-10T19:09:48.661489Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-10T19:09:48.661573Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.54:2380"}
	{"level":"info","ts":"2024-10-10T19:09:48.661579Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.54:2380"}
	
	
	==> kernel <==
	 19:10:03 up 1 min,  0 users,  load average: 2.68, 0.73, 0.25
	Linux kubernetes-upgrade-857939 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0dc9d1d2699b2b8b5231a40f42553f2dc48a656c3d532a84ea8330896ac803dd] <==
	I1010 19:09:48.509168       1 options.go:228] external host was not specified, using 192.168.50.54
	I1010 19:09:48.574160       1 server.go:142] Version: v1.31.1
	I1010 19:09:48.574279       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [320c5af008db8302948e0648c99017d936e2ce1714223c6c8356df0222489dc5] <==
	I1010 19:09:57.623852       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1010 19:09:57.623861       1 policy_source.go:224] refreshing policies
	I1010 19:09:57.635813       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1010 19:09:57.641857       1 shared_informer.go:320] Caches are synced for configmaps
	I1010 19:09:57.642041       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1010 19:09:57.642087       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1010 19:09:57.643021       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1010 19:09:57.643138       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1010 19:09:57.643418       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1010 19:09:57.643734       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1010 19:09:57.643451       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1010 19:09:57.645020       1 aggregator.go:171] initial CRD sync complete...
	I1010 19:09:57.645060       1 autoregister_controller.go:144] Starting autoregister controller
	I1010 19:09:57.645117       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1010 19:09:57.645994       1 cache.go:39] Caches are synced for autoregister controller
	I1010 19:09:57.664013       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E1010 19:09:57.668021       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1010 19:09:58.553381       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 19:09:58.859405       1 controller.go:615] quota admission added evaluator for: endpoints
	I1010 19:10:00.015356       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1010 19:10:00.031349       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1010 19:10:00.084673       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1010 19:10:00.181942       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 19:10:00.198288       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 19:10:01.215355       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6c3493cc24e42085d393796ef920f703d895b4b35c416f159d63ed957ab9eb64] <==
	I1010 19:10:00.979835       1 shared_informer.go:320] Caches are synced for node
	I1010 19:10:00.980062       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1010 19:10:00.980189       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1010 19:10:00.980270       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1010 19:10:00.980296       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1010 19:10:00.980652       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1010 19:10:00.981423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-857939"
	I1010 19:10:00.981667       1 shared_informer.go:320] Caches are synced for deployment
	I1010 19:10:00.983032       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1010 19:10:00.983138       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1010 19:10:00.983251       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1010 19:10:00.989355       1 shared_informer.go:320] Caches are synced for expand
	I1010 19:10:01.015914       1 shared_informer.go:320] Caches are synced for stateful set
	I1010 19:10:01.029652       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1010 19:10:01.062774       1 shared_informer.go:320] Caches are synced for daemon sets
	I1010 19:10:01.064981       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1010 19:10:01.065236       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-857939"
	I1010 19:10:01.099139       1 shared_informer.go:320] Caches are synced for endpoint
	I1010 19:10:01.163278       1 shared_informer.go:320] Caches are synced for resource quota
	I1010 19:10:01.165368       1 shared_informer.go:320] Caches are synced for resource quota
	I1010 19:10:01.219031       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1010 19:10:01.232737       1 shared_informer.go:320] Caches are synced for crt configmap
	I1010 19:10:01.610835       1 shared_informer.go:320] Caches are synced for garbage collector
	I1010 19:10:01.610956       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1010 19:10:01.613649       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [cbd6519bb068341c5ac74fe4bb0f26969b4fe0200c0e0c3a6e73d78cecb4e41f] <==
	
	
	==> kube-proxy [243356f4ddb2930ceaaf5ae3fafd611651861ba58c58df5d82adb33c3b9ff1fb] <==
	I1010 19:09:48.833406       1 server_linux.go:66] "Using iptables proxy"
	
	
	==> kube-proxy [b33e837852233b6a8caa8b2f8e6ff9b2d169b4edb5f3a2bbc742f279db5a3c95] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 19:09:59.091450       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 19:09:59.140238       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.54"]
	E1010 19:09:59.140319       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 19:09:59.241792       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 19:09:59.241858       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 19:09:59.241888       1 server_linux.go:169] "Using iptables Proxier"
	I1010 19:09:59.248638       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 19:09:59.249053       1 server.go:483] "Version info" version="v1.31.1"
	I1010 19:09:59.249069       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 19:09:59.251516       1 config.go:199] "Starting service config controller"
	I1010 19:09:59.251532       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 19:09:59.251550       1 config.go:105] "Starting endpoint slice config controller"
	I1010 19:09:59.251553       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 19:09:59.252035       1 config.go:328] "Starting node config controller"
	I1010 19:09:59.252067       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 19:09:59.353800       1 shared_informer.go:320] Caches are synced for service config
	I1010 19:09:59.353918       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1010 19:09:59.354255       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b42bd07a29a5cd1ff6d8440d53aa79d47720e838320559fe36ffa7e0c5f1caf6] <==
	I1010 19:09:55.084533       1 serving.go:386] Generated self-signed cert in-memory
	W1010 19:09:57.607195       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 19:09:57.607906       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found, role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found]
	W1010 19:09:57.608054       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 19:09:57.608158       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 19:09:57.635634       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1010 19:09:57.638758       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 19:09:57.645520       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1010 19:09:57.646813       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 19:09:57.646865       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1010 19:09:57.646909       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 19:09:57.747853       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [b966447cba2b071dd496757b4225b202aac5208a9a89429ae6fed1fc3f157f87] <==
	
	
	==> kubelet <==
	Oct 10 19:09:53 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:53.683266    3677 scope.go:117] "RemoveContainer" containerID="0dc9d1d2699b2b8b5231a40f42553f2dc48a656c3d532a84ea8330896ac803dd"
	Oct 10 19:09:53 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:53.684243    3677 scope.go:117] "RemoveContainer" containerID="cbd6519bb068341c5ac74fe4bb0f26969b4fe0200c0e0c3a6e73d78cecb4e41f"
	Oct 10 19:09:53 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:53.685838    3677 scope.go:117] "RemoveContainer" containerID="b966447cba2b071dd496757b4225b202aac5208a9a89429ae6fed1fc3f157f87"
	Oct 10 19:09:53 kubernetes-upgrade-857939 kubelet[3677]: E1010 19:09:53.813496    3677 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-857939?timeout=10s\": dial tcp 192.168.50.54:8443: connect: connection refused" interval="800ms"
	Oct 10 19:09:54 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:54.018648    3677 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-857939"
	Oct 10 19:09:54 kubernetes-upgrade-857939 kubelet[3677]: E1010 19:09:54.019654    3677 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.54:8443: connect: connection refused" node="kubernetes-upgrade-857939"
	Oct 10 19:09:54 kubernetes-upgrade-857939 kubelet[3677]: W1010 19:09:54.090489    3677 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Oct 10 19:09:54 kubernetes-upgrade-857939 kubelet[3677]: E1010 19:09:54.090565    3677 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.50.54:8443: connect: connection refused" logger="UnhandledError"
	Oct 10 19:09:54 kubernetes-upgrade-857939 kubelet[3677]: W1010 19:09:54.270455    3677 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.54:8443: connect: connection refused
	Oct 10 19:09:54 kubernetes-upgrade-857939 kubelet[3677]: E1010 19:09:54.270539    3677 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.54:8443: connect: connection refused" logger="UnhandledError"
	Oct 10 19:09:54 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:54.821382    3677 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-857939"
	Oct 10 19:09:57 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:57.713588    3677 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-857939"
	Oct 10 19:09:57 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:57.714004    3677 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-857939"
	Oct 10 19:09:57 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:57.714110    3677 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 10 19:09:57 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:57.716005    3677 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 10 19:09:58 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:58.182791    3677 apiserver.go:52] "Watching apiserver"
	Oct 10 19:09:58 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:58.208326    3677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 10 19:09:58 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:58.211610    3677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53084bc0-c109-4800-b766-eedd50b55889-lib-modules\") pod \"kube-proxy-x8vxp\" (UID: \"53084bc0-c109-4800-b766-eedd50b55889\") " pod="kube-system/kube-proxy-x8vxp"
	Oct 10 19:09:58 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:58.211755    3677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53084bc0-c109-4800-b766-eedd50b55889-xtables-lock\") pod \"kube-proxy-x8vxp\" (UID: \"53084bc0-c109-4800-b766-eedd50b55889\") " pod="kube-system/kube-proxy-x8vxp"
	Oct 10 19:09:58 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:58.211789    3677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6f5fff39-fb8a-44e7-86f3-afa3b2562474-tmp\") pod \"storage-provisioner\" (UID: \"6f5fff39-fb8a-44e7-86f3-afa3b2562474\") " pod="kube-system/storage-provisioner"
	Oct 10 19:09:58 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:58.488561    3677 scope.go:117] "RemoveContainer" containerID="70895ca59e65f6f4ab8e62d45ef620ae7d7bfe40adc15cdb084281df78601a2a"
	Oct 10 19:09:58 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:58.489195    3677 scope.go:117] "RemoveContainer" containerID="243356f4ddb2930ceaaf5ae3fafd611651861ba58c58df5d82adb33c3b9ff1fb"
	Oct 10 19:09:58 kubernetes-upgrade-857939 kubelet[3677]: I1010 19:09:58.489859    3677 scope.go:117] "RemoveContainer" containerID="bf9da9b806ce77c791b9dc6daf8a0b41e723b641389401735a5ca2fd97780110"
	Oct 10 19:10:03 kubernetes-upgrade-857939 kubelet[3677]: E1010 19:10:03.317891    3677 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728587403317313285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:10:03 kubernetes-upgrade-857939 kubelet[3677]: E1010 19:10:03.318132    3677 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728587403317313285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0e0e01e84564883f3661572bcea123c7be517221dcb2cc40672e903002fc85de] <==
	I1010 19:09:58.790097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 19:09:58.824589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 19:09:58.824740       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 19:09:58.893001       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 19:09:58.893142       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-857939_6cfbb631-c091-4083-9372-122f179beb75!
	I1010 19:09:58.894174       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d038fd4b-3da3-428a-8501-1f342d91143b", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-857939_6cfbb631-c091-4083-9372-122f179beb75 became leader
	I1010 19:09:58.995047       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-857939_6cfbb631-c091-4083-9372-122f179beb75!
	
	
	==> storage-provisioner [70895ca59e65f6f4ab8e62d45ef620ae7d7bfe40adc15cdb084281df78601a2a] <==
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-857939 -n kubernetes-upgrade-857939
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-857939 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-857939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-857939
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-857939: (1.001004154s)
--- FAIL: TestKubernetesUpgrade (371.95s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (286.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-947203 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-947203 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m46.247852433s)

                                                
                                                
-- stdout --
	* [old-k8s-version-947203] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19787
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-947203" primary control-plane node in "old-k8s-version-947203" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 19:13:40.339182  139570 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:13:40.339434  139570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:13:40.339443  139570 out.go:358] Setting ErrFile to fd 2...
	I1010 19:13:40.339447  139570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:13:40.339645  139570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:13:40.340383  139570 out.go:352] Setting JSON to false
	I1010 19:13:40.341623  139570 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10566,"bootTime":1728577054,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:13:40.341752  139570 start.go:139] virtualization: kvm guest
	I1010 19:13:40.344247  139570 out.go:177] * [old-k8s-version-947203] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:13:40.345752  139570 notify.go:220] Checking for updates...
	I1010 19:13:40.345766  139570 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:13:40.347134  139570 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:13:40.348352  139570 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:13:40.349561  139570 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:13:40.350900  139570 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:13:40.352269  139570 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:13:40.353839  139570 config.go:182] Loaded profile config "bridge-873515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:13:40.353930  139570 config.go:182] Loaded profile config "enable-default-cni-873515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:13:40.354062  139570 config.go:182] Loaded profile config "flannel-873515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:13:40.354180  139570 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:13:40.395125  139570 out.go:177] * Using the kvm2 driver based on user configuration
	I1010 19:13:40.396646  139570 start.go:297] selected driver: kvm2
	I1010 19:13:40.396666  139570 start.go:901] validating driver "kvm2" against <nil>
	I1010 19:13:40.396679  139570 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:13:40.397553  139570 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:13:40.397656  139570 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:13:40.415285  139570 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:13:40.415341  139570 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 19:13:40.415655  139570 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:13:40.415715  139570 cni.go:84] Creating CNI manager for ""
	I1010 19:13:40.415769  139570 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:13:40.415782  139570 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 19:13:40.415854  139570 start.go:340] cluster config:
	{Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:13:40.415962  139570 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:13:40.417916  139570 out.go:177] * Starting "old-k8s-version-947203" primary control-plane node in "old-k8s-version-947203" cluster
	I1010 19:13:40.419540  139570 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:13:40.419583  139570 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1010 19:13:40.419591  139570 cache.go:56] Caching tarball of preloaded images
	I1010 19:13:40.419698  139570 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:13:40.419710  139570 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1010 19:13:40.419852  139570 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:13:40.419891  139570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json: {Name:mk3a525c1d22bcd31ab316067b400814f52e019b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:13:40.420078  139570 start.go:360] acquireMachinesLock for old-k8s-version-947203: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:13:50.806306  139570 start.go:364] duration metric: took 10.386180176s to acquireMachinesLock for "old-k8s-version-947203"
	I1010 19:13:50.806367  139570 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:13:50.806486  139570 start.go:125] createHost starting for "" (driver="kvm2")
	I1010 19:13:50.808001  139570 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1010 19:13:50.808290  139570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:13:50.808333  139570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:13:50.830499  139570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37387
	I1010 19:13:50.831146  139570 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:13:50.832005  139570 main.go:141] libmachine: Using API Version  1
	I1010 19:13:50.832036  139570 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:13:50.832492  139570 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:13:50.832718  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:13:50.832991  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:13:50.833180  139570 start.go:159] libmachine.API.Create for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:13:50.833220  139570 client.go:168] LocalClient.Create starting
	I1010 19:13:50.833256  139570 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem
	I1010 19:13:50.833293  139570 main.go:141] libmachine: Decoding PEM data...
	I1010 19:13:50.833310  139570 main.go:141] libmachine: Parsing certificate...
	I1010 19:13:50.833375  139570 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem
	I1010 19:13:50.833418  139570 main.go:141] libmachine: Decoding PEM data...
	I1010 19:13:50.833432  139570 main.go:141] libmachine: Parsing certificate...
	I1010 19:13:50.833463  139570 main.go:141] libmachine: Running pre-create checks...
	I1010 19:13:50.833474  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .PreCreateCheck
	I1010 19:13:50.833959  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:13:50.834508  139570 main.go:141] libmachine: Creating machine...
	I1010 19:13:50.834527  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .Create
	I1010 19:13:50.835187  139570 main.go:141] libmachine: (old-k8s-version-947203) Creating KVM machine...
	I1010 19:13:50.836182  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found existing default KVM network
	I1010 19:13:50.840678  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:50.837771  140351 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:77:ea:d8} reservation:<nil>}
	I1010 19:13:50.840721  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:50.838998  140351 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8d:df:42} reservation:<nil>}
	I1010 19:13:50.840749  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:50.840448  140351 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000285e90}
	I1010 19:13:50.840755  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | created network xml: 
	I1010 19:13:50.840762  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | <network>
	I1010 19:13:50.840768  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG |   <name>mk-old-k8s-version-947203</name>
	I1010 19:13:50.840774  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG |   <dns enable='no'/>
	I1010 19:13:50.840778  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG |   
	I1010 19:13:50.840785  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1010 19:13:50.840790  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG |     <dhcp>
	I1010 19:13:50.840797  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1010 19:13:50.840802  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG |     </dhcp>
	I1010 19:13:50.840807  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG |   </ip>
	I1010 19:13:50.840812  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG |   
	I1010 19:13:50.840817  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | </network>
	I1010 19:13:50.840821  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | 
	I1010 19:13:50.846078  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | trying to create private KVM network mk-old-k8s-version-947203 192.168.61.0/24...
	I1010 19:13:50.944398  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | private KVM network mk-old-k8s-version-947203 192.168.61.0/24 created
	I1010 19:13:50.944624  139570 main.go:141] libmachine: (old-k8s-version-947203) Setting up store path in /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203 ...
	I1010 19:13:50.944654  139570 main.go:141] libmachine: (old-k8s-version-947203) Building disk image from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 19:13:50.944755  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:50.944706  140351 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:13:50.945003  139570 main.go:141] libmachine: (old-k8s-version-947203) Downloading /home/jenkins/minikube-integration/19787-81676/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1010 19:13:51.250773  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:51.250563  140351 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa...
	I1010 19:13:51.416441  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:51.416326  140351 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/old-k8s-version-947203.rawdisk...
	I1010 19:13:51.416479  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Writing magic tar header
	I1010 19:13:51.416499  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Writing SSH key tar header
	I1010 19:13:51.416620  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:51.416556  140351 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203 ...
	I1010 19:13:51.416687  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203
	I1010 19:13:51.416738  139570 main.go:141] libmachine: (old-k8s-version-947203) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203 (perms=drwx------)
	I1010 19:13:51.416754  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube/machines
	I1010 19:13:51.416765  139570 main.go:141] libmachine: (old-k8s-version-947203) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube/machines (perms=drwxr-xr-x)
	I1010 19:13:51.416786  139570 main.go:141] libmachine: (old-k8s-version-947203) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676/.minikube (perms=drwxr-xr-x)
	I1010 19:13:51.416797  139570 main.go:141] libmachine: (old-k8s-version-947203) Setting executable bit set on /home/jenkins/minikube-integration/19787-81676 (perms=drwxrwxr-x)
	I1010 19:13:51.416808  139570 main.go:141] libmachine: (old-k8s-version-947203) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1010 19:13:51.416816  139570 main.go:141] libmachine: (old-k8s-version-947203) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1010 19:13:51.416826  139570 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:13:51.416869  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:13:51.416882  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19787-81676
	I1010 19:13:51.416893  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1010 19:13:51.416900  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Checking permissions on dir: /home/jenkins
	I1010 19:13:51.416910  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Checking permissions on dir: /home
	I1010 19:13:51.416918  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Skipping /home - not owner
	I1010 19:13:51.418144  139570 main.go:141] libmachine: (old-k8s-version-947203) define libvirt domain using xml: 
	I1010 19:13:51.418165  139570 main.go:141] libmachine: (old-k8s-version-947203) <domain type='kvm'>
	I1010 19:13:51.418187  139570 main.go:141] libmachine: (old-k8s-version-947203)   <name>old-k8s-version-947203</name>
	I1010 19:13:51.418196  139570 main.go:141] libmachine: (old-k8s-version-947203)   <memory unit='MiB'>2200</memory>
	I1010 19:13:51.418209  139570 main.go:141] libmachine: (old-k8s-version-947203)   <vcpu>2</vcpu>
	I1010 19:13:51.418216  139570 main.go:141] libmachine: (old-k8s-version-947203)   <features>
	I1010 19:13:51.418229  139570 main.go:141] libmachine: (old-k8s-version-947203)     <acpi/>
	I1010 19:13:51.418241  139570 main.go:141] libmachine: (old-k8s-version-947203)     <apic/>
	I1010 19:13:51.418252  139570 main.go:141] libmachine: (old-k8s-version-947203)     <pae/>
	I1010 19:13:51.418262  139570 main.go:141] libmachine: (old-k8s-version-947203)     
	I1010 19:13:51.418272  139570 main.go:141] libmachine: (old-k8s-version-947203)   </features>
	I1010 19:13:51.418281  139570 main.go:141] libmachine: (old-k8s-version-947203)   <cpu mode='host-passthrough'>
	I1010 19:13:51.418293  139570 main.go:141] libmachine: (old-k8s-version-947203)   
	I1010 19:13:51.418306  139570 main.go:141] libmachine: (old-k8s-version-947203)   </cpu>
	I1010 19:13:51.418318  139570 main.go:141] libmachine: (old-k8s-version-947203)   <os>
	I1010 19:13:51.418329  139570 main.go:141] libmachine: (old-k8s-version-947203)     <type>hvm</type>
	I1010 19:13:51.418340  139570 main.go:141] libmachine: (old-k8s-version-947203)     <boot dev='cdrom'/>
	I1010 19:13:51.418361  139570 main.go:141] libmachine: (old-k8s-version-947203)     <boot dev='hd'/>
	I1010 19:13:51.418374  139570 main.go:141] libmachine: (old-k8s-version-947203)     <bootmenu enable='no'/>
	I1010 19:13:51.418381  139570 main.go:141] libmachine: (old-k8s-version-947203)   </os>
	I1010 19:13:51.418390  139570 main.go:141] libmachine: (old-k8s-version-947203)   <devices>
	I1010 19:13:51.418402  139570 main.go:141] libmachine: (old-k8s-version-947203)     <disk type='file' device='cdrom'>
	I1010 19:13:51.418420  139570 main.go:141] libmachine: (old-k8s-version-947203)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/boot2docker.iso'/>
	I1010 19:13:51.418433  139570 main.go:141] libmachine: (old-k8s-version-947203)       <target dev='hdc' bus='scsi'/>
	I1010 19:13:51.418446  139570 main.go:141] libmachine: (old-k8s-version-947203)       <readonly/>
	I1010 19:13:51.418455  139570 main.go:141] libmachine: (old-k8s-version-947203)     </disk>
	I1010 19:13:51.418468  139570 main.go:141] libmachine: (old-k8s-version-947203)     <disk type='file' device='disk'>
	I1010 19:13:51.418478  139570 main.go:141] libmachine: (old-k8s-version-947203)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1010 19:13:51.418497  139570 main.go:141] libmachine: (old-k8s-version-947203)       <source file='/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/old-k8s-version-947203.rawdisk'/>
	I1010 19:13:51.418509  139570 main.go:141] libmachine: (old-k8s-version-947203)       <target dev='hda' bus='virtio'/>
	I1010 19:13:51.418519  139570 main.go:141] libmachine: (old-k8s-version-947203)     </disk>
	I1010 19:13:51.418530  139570 main.go:141] libmachine: (old-k8s-version-947203)     <interface type='network'>
	I1010 19:13:51.418544  139570 main.go:141] libmachine: (old-k8s-version-947203)       <source network='mk-old-k8s-version-947203'/>
	I1010 19:13:51.418555  139570 main.go:141] libmachine: (old-k8s-version-947203)       <model type='virtio'/>
	I1010 19:13:51.418564  139570 main.go:141] libmachine: (old-k8s-version-947203)     </interface>
	I1010 19:13:51.418574  139570 main.go:141] libmachine: (old-k8s-version-947203)     <interface type='network'>
	I1010 19:13:51.418587  139570 main.go:141] libmachine: (old-k8s-version-947203)       <source network='default'/>
	I1010 19:13:51.418598  139570 main.go:141] libmachine: (old-k8s-version-947203)       <model type='virtio'/>
	I1010 19:13:51.418610  139570 main.go:141] libmachine: (old-k8s-version-947203)     </interface>
	I1010 19:13:51.418620  139570 main.go:141] libmachine: (old-k8s-version-947203)     <serial type='pty'>
	I1010 19:13:51.418632  139570 main.go:141] libmachine: (old-k8s-version-947203)       <target port='0'/>
	I1010 19:13:51.418639  139570 main.go:141] libmachine: (old-k8s-version-947203)     </serial>
	I1010 19:13:51.418650  139570 main.go:141] libmachine: (old-k8s-version-947203)     <console type='pty'>
	I1010 19:13:51.418663  139570 main.go:141] libmachine: (old-k8s-version-947203)       <target type='serial' port='0'/>
	I1010 19:13:51.418675  139570 main.go:141] libmachine: (old-k8s-version-947203)     </console>
	I1010 19:13:51.418694  139570 main.go:141] libmachine: (old-k8s-version-947203)     <rng model='virtio'>
	I1010 19:13:51.418706  139570 main.go:141] libmachine: (old-k8s-version-947203)       <backend model='random'>/dev/random</backend>
	I1010 19:13:51.418716  139570 main.go:141] libmachine: (old-k8s-version-947203)     </rng>
	I1010 19:13:51.418723  139570 main.go:141] libmachine: (old-k8s-version-947203)     
	I1010 19:13:51.418729  139570 main.go:141] libmachine: (old-k8s-version-947203)     
	I1010 19:13:51.418747  139570 main.go:141] libmachine: (old-k8s-version-947203)   </devices>
	I1010 19:13:51.418758  139570 main.go:141] libmachine: (old-k8s-version-947203) </domain>
	I1010 19:13:51.418769  139570 main.go:141] libmachine: (old-k8s-version-947203) 
	I1010 19:13:51.424987  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:4e:dd:30 in network default
	I1010 19:13:51.425804  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:13:51.425877  139570 main.go:141] libmachine: (old-k8s-version-947203) Ensuring networks are active...
	I1010 19:13:51.426784  139570 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network default is active
	I1010 19:13:51.427210  139570 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network mk-old-k8s-version-947203 is active
	I1010 19:13:51.428047  139570 main.go:141] libmachine: (old-k8s-version-947203) Getting domain xml...
	I1010 19:13:51.428969  139570 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:13:52.936565  139570 main.go:141] libmachine: (old-k8s-version-947203) Waiting to get IP...
	I1010 19:13:52.938231  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:13:52.938609  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:13:52.938783  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:52.938578  140351 retry.go:31] will retry after 202.042755ms: waiting for machine to come up
	I1010 19:13:53.144590  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:13:53.145179  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:13:53.145245  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:53.145137  140351 retry.go:31] will retry after 271.252418ms: waiting for machine to come up
	I1010 19:13:53.417765  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:13:53.419188  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:13:53.419217  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:53.419106  140351 retry.go:31] will retry after 432.199139ms: waiting for machine to come up
	I1010 19:13:53.853205  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:13:53.853838  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:13:53.853864  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:53.853748  140351 retry.go:31] will retry after 573.506053ms: waiting for machine to come up
	I1010 19:13:54.428494  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:13:54.429250  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:13:54.429270  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:54.429123  140351 retry.go:31] will retry after 492.468595ms: waiting for machine to come up
	I1010 19:13:54.926274  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:13:54.927078  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:13:54.927115  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:54.926964  140351 retry.go:31] will retry after 949.171235ms: waiting for machine to come up
	I1010 19:13:55.879920  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:13:55.881491  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:13:55.881520  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:55.881390  140351 retry.go:31] will retry after 830.235502ms: waiting for machine to come up
	I1010 19:13:57.131209  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:13:57.136696  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:13:57.136722  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:57.136644  140351 retry.go:31] will retry after 1.102386112s: waiting for machine to come up
	I1010 19:13:58.241488  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:13:58.242271  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:13:58.242343  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:58.242227  140351 retry.go:31] will retry after 1.456611687s: waiting for machine to come up
	I1010 19:13:59.700311  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:13:59.700937  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:13:59.700970  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:13:59.700883  140351 retry.go:31] will retry after 1.516608119s: waiting for machine to come up
	I1010 19:14:01.219428  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:01.219983  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:14:01.220011  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:14:01.219934  140351 retry.go:31] will retry after 2.40510796s: waiting for machine to come up
	I1010 19:14:03.628630  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:03.629182  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:14:03.629204  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:14:03.629147  140351 retry.go:31] will retry after 3.272666395s: waiting for machine to come up
	I1010 19:14:06.903123  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:06.903695  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:14:06.903729  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:14:06.903634  140351 retry.go:31] will retry after 3.85416721s: waiting for machine to come up
	I1010 19:14:10.762133  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:10.762642  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:14:10.762664  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:14:10.762606  140351 retry.go:31] will retry after 4.699461222s: waiting for machine to come up
	I1010 19:14:15.466357  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:15.466907  139570 main.go:141] libmachine: (old-k8s-version-947203) Found IP for machine: 192.168.61.112
	I1010 19:14:15.466938  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has current primary IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:15.466964  139570 main.go:141] libmachine: (old-k8s-version-947203) Reserving static IP address...
	I1010 19:14:15.467299  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"} in network mk-old-k8s-version-947203
	I1010 19:14:15.549668  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:14:15.549705  139570 main.go:141] libmachine: (old-k8s-version-947203) Reserved static IP address: 192.168.61.112
	I1010 19:14:15.549719  139570 main.go:141] libmachine: (old-k8s-version-947203) Waiting for SSH to be available...
	I1010 19:14:15.552473  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:15.552805  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203
	I1010 19:14:15.552830  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find defined IP address of network mk-old-k8s-version-947203 interface with MAC address 52:54:00:b5:0b:b2
	I1010 19:14:15.553067  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:14:15.553103  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:14:15.553143  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:14:15.553164  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:14:15.553217  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:14:15.557072  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: exit status 255: 
	I1010 19:14:15.557097  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1010 19:14:15.557109  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | command : exit 0
	I1010 19:14:15.557118  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | err     : exit status 255
	I1010 19:14:15.557132  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | output  : 
	I1010 19:14:18.557344  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:14:18.560703  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:18.561225  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:18.561266  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:18.561458  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:14:18.561481  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:14:18.561528  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:14:18.561549  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:14:18.561560  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:14:18.689585  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: <nil>: 
	I1010 19:14:18.689881  139570 main.go:141] libmachine: (old-k8s-version-947203) KVM machine creation complete!
	I1010 19:14:18.690207  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:14:18.690844  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:14:18.691106  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:14:18.691380  139570 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1010 19:14:18.691401  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetState
	I1010 19:14:18.693025  139570 main.go:141] libmachine: Detecting operating system of created instance...
	I1010 19:14:18.693040  139570 main.go:141] libmachine: Waiting for SSH to be available...
	I1010 19:14:18.693045  139570 main.go:141] libmachine: Getting to WaitForSSH function...
	I1010 19:14:18.693051  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:14:18.695809  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:18.696335  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:18.696383  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:18.696533  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:14:18.696711  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:18.696839  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:18.696965  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:14:18.697106  139570 main.go:141] libmachine: Using SSH client type: native
	I1010 19:14:18.697348  139570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:14:18.697366  139570 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1010 19:14:18.808359  139570 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:14:18.808388  139570 main.go:141] libmachine: Detecting the provisioner...
	I1010 19:14:18.808399  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:14:18.811558  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:18.811915  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:18.811959  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:18.812201  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:14:18.812443  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:18.812650  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:18.812803  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:14:18.813031  139570 main.go:141] libmachine: Using SSH client type: native
	I1010 19:14:18.813203  139570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:14:18.813217  139570 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1010 19:14:18.930628  139570 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1010 19:14:18.930741  139570 main.go:141] libmachine: found compatible host: buildroot
	I1010 19:14:18.930759  139570 main.go:141] libmachine: Provisioning with buildroot...
	I1010 19:14:18.930770  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:14:18.931040  139570 buildroot.go:166] provisioning hostname "old-k8s-version-947203"
	I1010 19:14:18.931074  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:14:18.931330  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:14:18.933928  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:18.934370  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:18.934399  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:18.934547  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:14:18.934745  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:18.934910  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:18.935031  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:14:18.935215  139570 main.go:141] libmachine: Using SSH client type: native
	I1010 19:14:18.935389  139570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:14:18.935410  139570 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947203 && echo "old-k8s-version-947203" | sudo tee /etc/hostname
	I1010 19:14:19.065091  139570 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947203
	
	I1010 19:14:19.065122  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:14:19.067904  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.068342  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.068370  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.068572  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:14:19.068767  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:19.069001  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:19.069132  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:14:19.069293  139570 main.go:141] libmachine: Using SSH client type: native
	I1010 19:14:19.069488  139570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:14:19.069505  139570 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:14:19.190791  139570 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:14:19.190823  139570 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:14:19.190885  139570 buildroot.go:174] setting up certificates
	I1010 19:14:19.190898  139570 provision.go:84] configureAuth start
	I1010 19:14:19.190916  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:14:19.191255  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:14:19.195743  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.196107  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.196142  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.196290  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:14:19.198496  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.198923  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.198970  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.199103  139570 provision.go:143] copyHostCerts
	I1010 19:14:19.199174  139570 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:14:19.199189  139570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:14:19.199262  139570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:14:19.199400  139570 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:14:19.199410  139570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:14:19.199433  139570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:14:19.199497  139570 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:14:19.199506  139570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:14:19.199524  139570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:14:19.199570  139570 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947203 san=[127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]
	I1010 19:14:19.296075  139570 provision.go:177] copyRemoteCerts
	I1010 19:14:19.296136  139570 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:14:19.296165  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:14:19.298999  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.299390  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.299423  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.299650  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:14:19.299837  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:19.299972  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:14:19.300148  139570 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:14:19.387203  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:14:19.414317  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1010 19:14:19.441030  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:14:19.466833  139570 provision.go:87] duration metric: took 275.915453ms to configureAuth
	I1010 19:14:19.466874  139570 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:14:19.467089  139570 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:14:19.467202  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:14:19.470162  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.470527  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.470619  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.470768  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:14:19.470981  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:19.471138  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:19.471318  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:14:19.471480  139570 main.go:141] libmachine: Using SSH client type: native
	I1010 19:14:19.471662  139570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:14:19.471678  139570 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:14:19.712190  139570 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:14:19.712218  139570 main.go:141] libmachine: Checking connection to Docker...
	I1010 19:14:19.712230  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetURL
	I1010 19:14:19.713453  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using libvirt version 6000000
	I1010 19:14:19.716158  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.716532  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.716578  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.716709  139570 main.go:141] libmachine: Docker is up and running!
	I1010 19:14:19.716728  139570 main.go:141] libmachine: Reticulating splines...
	I1010 19:14:19.716738  139570 client.go:171] duration metric: took 28.883508217s to LocalClient.Create
	I1010 19:14:19.716767  139570 start.go:167] duration metric: took 28.883588965s to libmachine.API.Create "old-k8s-version-947203"
	I1010 19:14:19.716780  139570 start.go:293] postStartSetup for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:14:19.716794  139570 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:14:19.716819  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:14:19.717088  139570 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:14:19.717120  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:14:19.719637  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.719985  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.720017  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.720194  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:14:19.720356  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:19.720475  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:14:19.720581  139570 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:14:19.807504  139570 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:14:19.812368  139570 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:14:19.812406  139570 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:14:19.812481  139570 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:14:19.812587  139570 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:14:19.812684  139570 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:14:19.822867  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:14:19.849187  139570 start.go:296] duration metric: took 132.387027ms for postStartSetup
	I1010 19:14:19.849260  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:14:19.850009  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:14:19.852778  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.853112  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.853142  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.853400  139570 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:14:19.853607  139570 start.go:128] duration metric: took 29.047109443s to createHost
	I1010 19:14:19.853631  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:14:19.856336  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.856735  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.856766  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.856989  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:14:19.857187  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:19.857362  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:19.857537  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:14:19.857692  139570 main.go:141] libmachine: Using SSH client type: native
	I1010 19:14:19.857862  139570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:14:19.857872  139570 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:14:19.969858  139570 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728587659.956540984
	
	I1010 19:14:19.969895  139570 fix.go:216] guest clock: 1728587659.956540984
	I1010 19:14:19.969906  139570 fix.go:229] Guest: 2024-10-10 19:14:19.956540984 +0000 UTC Remote: 2024-10-10 19:14:19.853619837 +0000 UTC m=+39.554426874 (delta=102.921147ms)
	I1010 19:14:19.969937  139570 fix.go:200] guest clock delta is within tolerance: 102.921147ms
	I1010 19:14:19.969944  139570 start.go:83] releasing machines lock for "old-k8s-version-947203", held for 29.163606399s
	I1010 19:14:19.969974  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:14:19.970283  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:14:19.973535  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.973880  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.973917  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.974081  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:14:19.974607  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:14:19.974816  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:14:19.974897  139570 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:14:19.974963  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:14:19.975041  139570 ssh_runner.go:195] Run: cat /version.json
	I1010 19:14:19.975082  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:14:19.977712  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.977981  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.978080  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.978111  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.978266  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:14:19.978381  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:19.978404  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:19.978465  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:19.978550  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:14:19.978640  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:14:19.978715  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:14:19.978790  139570 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:14:19.978829  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:14:19.978930  139570 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:14:20.066492  139570 ssh_runner.go:195] Run: systemctl --version
	I1010 19:14:20.089166  139570 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:14:20.262402  139570 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:14:20.269036  139570 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:14:20.269121  139570 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:14:20.288025  139570 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:14:20.288060  139570 start.go:495] detecting cgroup driver to use...
	I1010 19:14:20.288135  139570 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:14:20.310102  139570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:14:20.326132  139570 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:14:20.326211  139570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:14:20.343535  139570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:14:20.358384  139570 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:14:20.486859  139570 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:14:20.664089  139570 docker.go:233] disabling docker service ...
	I1010 19:14:20.664162  139570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:14:20.685654  139570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:14:20.706192  139570 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:14:20.840601  139570 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:14:20.977413  139570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:14:20.998743  139570 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:14:21.025286  139570 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:14:21.025354  139570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:14:21.042017  139570 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:14:21.042100  139570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:14:21.054510  139570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:14:21.066862  139570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:14:21.081272  139570 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:14:21.094633  139570 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:14:21.105562  139570 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:14:21.105633  139570 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:14:21.120988  139570 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:14:21.132422  139570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:14:21.268388  139570 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:14:21.377054  139570 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:14:21.377140  139570 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:14:21.383006  139570 start.go:563] Will wait 60s for crictl version
	I1010 19:14:21.383078  139570 ssh_runner.go:195] Run: which crictl
	I1010 19:14:21.390440  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:14:21.443670  139570 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:14:21.443780  139570 ssh_runner.go:195] Run: crio --version
	I1010 19:14:21.481754  139570 ssh_runner.go:195] Run: crio --version
	I1010 19:14:21.515906  139570 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:14:21.517720  139570 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:14:21.521115  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:21.521583  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:14:07 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:14:21.521612  139570 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:14:21.521783  139570 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1010 19:14:21.526488  139570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:14:21.540747  139570 kubeadm.go:883] updating cluster {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:14:21.540914  139570 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:14:21.540979  139570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:14:21.577456  139570 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:14:21.577543  139570 ssh_runner.go:195] Run: which lz4
	I1010 19:14:21.582490  139570 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:14:21.587996  139570 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:14:21.588038  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:14:23.486190  139570 crio.go:462] duration metric: took 1.903735147s to copy over tarball
	I1010 19:14:23.486294  139570 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:14:26.489343  139570 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.003005353s)
	I1010 19:14:26.489387  139570 crio.go:469] duration metric: took 3.003156848s to extract the tarball
	I1010 19:14:26.489399  139570 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:14:26.538621  139570 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:14:26.613775  139570 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:14:26.613806  139570 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:14:26.613909  139570 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:14:26.613945  139570 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:14:26.613958  139570 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:14:26.613978  139570 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:14:26.613975  139570 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:14:26.613893  139570 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:14:26.613949  139570 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:14:26.613895  139570 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:14:26.615504  139570 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:14:26.615702  139570 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:14:26.615771  139570 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:14:26.615779  139570 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:14:26.615712  139570 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:14:26.615712  139570 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:14:26.615724  139570 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:14:26.615734  139570 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:14:26.780505  139570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:14:26.781469  139570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:14:26.793326  139570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:14:26.793328  139570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:14:26.798337  139570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:14:26.800966  139570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:14:26.846157  139570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:14:26.923922  139570 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:14:26.924038  139570 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:14:26.924162  139570 ssh_runner.go:195] Run: which crictl
	I1010 19:14:26.997757  139570 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:14:26.997806  139570 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:14:26.997856  139570 ssh_runner.go:195] Run: which crictl
	I1010 19:14:27.014805  139570 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:14:27.014858  139570 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:14:27.014858  139570 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:14:27.014890  139570 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:14:27.014906  139570 ssh_runner.go:195] Run: which crictl
	I1010 19:14:27.014932  139570 ssh_runner.go:195] Run: which crictl
	I1010 19:14:27.027782  139570 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:14:27.027834  139570 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:14:27.027849  139570 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:14:27.027881  139570 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:14:27.027890  139570 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:14:27.027908  139570 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:14:27.027926  139570 ssh_runner.go:195] Run: which crictl
	I1010 19:14:27.027930  139570 ssh_runner.go:195] Run: which crictl
	I1010 19:14:27.027884  139570 ssh_runner.go:195] Run: which crictl
	I1010 19:14:27.027950  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:14:27.027988  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:14:27.028055  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:14:27.028099  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:14:27.130800  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:14:27.130876  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:14:27.130936  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:14:27.131021  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:14:27.131069  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:14:27.131120  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:14:27.130881  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:14:27.279278  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:14:27.318579  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:14:27.318631  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:14:27.318742  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:14:27.318777  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:14:27.318879  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:14:27.318911  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:14:27.377869  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:14:27.488730  139570 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:14:27.505759  139570 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:14:27.505815  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:14:27.505870  139570 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:14:27.505878  139570 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:14:27.505883  139570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:14:27.530390  139570 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:14:27.558078  139570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:14:27.576316  139570 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:14:27.576412  139570 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:14:27.724349  139570 cache_images.go:92] duration metric: took 1.110522659s to LoadCachedImages
	W1010 19:14:27.724466  139570 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1010 19:14:27.724485  139570 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.20.0 crio true true} ...
	I1010 19:14:27.724620  139570 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-947203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:14:27.724705  139570 ssh_runner.go:195] Run: crio config
	I1010 19:14:27.776181  139570 cni.go:84] Creating CNI manager for ""
	I1010 19:14:27.776213  139570 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:14:27.776230  139570 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:14:27.776257  139570 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947203 NodeName:old-k8s-version-947203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:14:27.776478  139570 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-947203"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
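	(Editor's note, not part of the captured log.) The kubeadm.yaml generated above is a multi-document file bundling InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. As an illustrative sketch only (not minikube code), the Go program below splits such a file into its documents and prints each apiVersion/kind; the path /var/tmp/minikube/kubeadm.yaml is the destination the log shows the config being copied to later, and gopkg.in/yaml.v3 is an assumed dependency.

// split_kubeadm_yaml.go: hedged sketch, assumes /var/tmp/minikube/kubeadm.yaml exists.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// The file holds several YAML documents separated by "---";
	// decode them one by one and report what each one declares.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}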
	
	I1010 19:14:27.776571  139570 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:14:27.799506  139570 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:14:27.799583  139570 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:14:27.812738  139570 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1010 19:14:27.833714  139570 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:14:27.852313  139570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:14:27.871051  139570 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I1010 19:14:27.875276  139570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
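	(Editor's note, not part of the captured log.) The bash one-liner above keeps the control-plane.minikube.internal entry in /etc/hosts unique: strip any existing line for that host, append the current node IP, and copy the result back. The Go sketch below mirrors that filter-and-append idea; it is illustrative only (it prints the rewritten file instead of overwriting /etc/hosts, which requires root), with the IP and hostname taken from this log.

// rewrite_hosts_sketch.go: hypothetical illustration of the hosts-file update pattern above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.61.112"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	// Keep every line except an existing "<ip>\t<host>" entry, matching
	// the grep -v $'\t<host>$' filter in the logged command.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)

	// Print the would-be contents; the real flow copies this back over /etc/hosts.
	fmt.Print(strings.Join(kept, "\n") + "\n")
}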
	I1010 19:14:27.888585  139570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:14:28.019519  139570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:14:28.039571  139570 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203 for IP: 192.168.61.112
	I1010 19:14:28.039602  139570 certs.go:194] generating shared ca certs ...
	I1010 19:14:28.039624  139570 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:14:28.039827  139570 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:14:28.039897  139570 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:14:28.039911  139570 certs.go:256] generating profile certs ...
	I1010 19:14:28.039982  139570 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key
	I1010 19:14:28.040001  139570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.crt with IP's: []
	I1010 19:14:28.119671  139570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.crt ...
	I1010 19:14:28.119708  139570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.crt: {Name:mk54c00c083ced661e2d02170be14eec45b8842b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:14:28.119911  139570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key ...
	I1010 19:14:28.119934  139570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key: {Name:mk90c2c313f49cc32fa7b53c403beace08ccec4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:14:28.120045  139570 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52
	I1010 19:14:28.120069  139570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt.8a666a52 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.112]
	I1010 19:14:28.301415  139570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt.8a666a52 ...
	I1010 19:14:28.301450  139570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt.8a666a52: {Name:mk583272623e034ae0e00bd7fcd8c4b46d2efa8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:14:28.301654  139570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52 ...
	I1010 19:14:28.301676  139570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52: {Name:mk6ff3f41400f6dbbef273100a314ee43fd74d8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:14:28.301847  139570 certs.go:381] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt.8a666a52 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt
	I1010 19:14:28.302030  139570 certs.go:385] copying /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52 -> /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key
	I1010 19:14:28.302126  139570 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key
	I1010 19:14:28.302150  139570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt with IP's: []
	I1010 19:14:28.795105  139570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt ...
	I1010 19:14:28.795151  139570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt: {Name:mkd915116dd48e2ee087b8bf1cbee45e1f73a6d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:14:28.795322  139570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key ...
	I1010 19:14:28.795338  139570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key: {Name:mk45aab47fde70da02aa548e562fcfd2e15ca5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:14:28.795508  139570 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:14:28.795551  139570 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:14:28.795560  139570 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:14:28.795581  139570 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:14:28.795602  139570 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:14:28.795623  139570 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:14:28.795660  139570 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:14:28.796277  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:14:28.828527  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:14:28.857238  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:14:28.887991  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:14:28.922840  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 19:14:28.977082  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:14:29.034958  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:14:29.071322  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:14:29.104949  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:14:29.145678  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:14:29.179867  139570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:14:29.211218  139570 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:14:29.232772  139570 ssh_runner.go:195] Run: openssl version
	I1010 19:14:29.239399  139570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:14:29.251642  139570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:14:29.257147  139570 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:14:29.257231  139570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:14:29.263975  139570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:14:29.278023  139570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:14:29.292220  139570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:14:29.297666  139570 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:14:29.297742  139570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:14:29.304645  139570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:14:29.318091  139570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:14:29.331756  139570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:14:29.337623  139570 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:14:29.337685  139570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:14:29.345307  139570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:14:29.360364  139570 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:14:29.366170  139570 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 19:14:29.366239  139570 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:14:29.366361  139570 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:14:29.366441  139570 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:14:29.418196  139570 cri.go:89] found id: ""
	I1010 19:14:29.418283  139570 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:14:29.435663  139570 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:14:29.448888  139570 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:14:29.462406  139570 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:14:29.462436  139570 kubeadm.go:157] found existing configuration files:
	
	I1010 19:14:29.462491  139570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:14:29.475690  139570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:14:29.475761  139570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:14:29.489555  139570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:14:29.499663  139570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:14:29.499725  139570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:14:29.513073  139570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:14:29.525182  139570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:14:29.525247  139570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:14:29.537627  139570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:14:29.548057  139570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:14:29.548127  139570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
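	(Editor's note, not part of the captured log.) The sequence above is minikube's stale-config check: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and, when the grep fails (file missing or endpoint absent), remove the file so kubeadm can rewrite it. Below is a hypothetical Go sketch of that grep-then-remove pattern, not the actual minikube implementation; the paths and endpoint are copied from the log.

// stale_config_cleanup_sketch.go: illustrative only.
package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanStaleConfig keeps path only if it already references endpoint;
// otherwise it removes the file so it can be regenerated.
func cleanStaleConfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && bytes.Contains(data, []byte(endpoint)) {
		return nil // config already points at the right endpoint; keep it
	}
	if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
		return rmErr
	}
	return nil
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(f, "https://control-plane.minikube.internal:8443"); err != nil {
			fmt.Fprintln(os.Stderr, "cleanup:", err)
		}
	}
}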
	I1010 19:14:29.558106  139570 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:14:29.681994  139570 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:14:29.682077  139570 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:14:29.872376  139570 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:14:29.872524  139570 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:14:29.872641  139570 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:14:30.090108  139570 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:14:30.093128  139570 out.go:235]   - Generating certificates and keys ...
	I1010 19:14:30.093256  139570 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:14:30.093334  139570 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:14:30.451633  139570 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 19:14:30.533678  139570 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1010 19:14:30.713132  139570 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1010 19:14:31.124230  139570 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1010 19:14:31.211429  139570 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1010 19:14:31.215401  139570 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-947203] and IPs [192.168.61.112 127.0.0.1 ::1]
	I1010 19:14:31.459871  139570 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1010 19:14:31.464268  139570 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-947203] and IPs [192.168.61.112 127.0.0.1 ::1]
	I1010 19:14:31.626413  139570 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 19:14:31.748348  139570 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 19:14:31.933036  139570 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1010 19:14:31.933299  139570 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:14:32.126617  139570 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:14:32.388049  139570 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:14:32.541058  139570 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:14:32.729745  139570 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:14:32.764922  139570 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:14:32.769447  139570 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:14:32.769559  139570 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:14:32.915634  139570 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:14:32.917535  139570 out.go:235]   - Booting up control plane ...
	I1010 19:14:32.917678  139570 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:14:32.924875  139570 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:14:32.925954  139570 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:14:32.926910  139570 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:14:32.934013  139570 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:15:12.935990  139570 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:15:12.936481  139570 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:15:12.936790  139570 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:15:17.937308  139570 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:15:17.937546  139570 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:15:27.938407  139570 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:15:27.938738  139570 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:15:47.939700  139570 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:15:47.939919  139570 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:16:27.941080  139570 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:16:27.941373  139570 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:16:27.941402  139570 kubeadm.go:310] 
	I1010 19:16:27.941468  139570 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:16:27.941513  139570 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:16:27.941519  139570 kubeadm.go:310] 
	I1010 19:16:27.941549  139570 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:16:27.941578  139570 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:16:27.941735  139570 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:16:27.941762  139570 kubeadm.go:310] 
	I1010 19:16:27.941902  139570 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:16:27.941954  139570 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:16:27.942022  139570 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:16:27.942039  139570 kubeadm.go:310] 
	I1010 19:16:27.942224  139570 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:16:27.942358  139570 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:16:27.942368  139570 kubeadm.go:310] 
	I1010 19:16:27.942518  139570 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:16:27.942656  139570 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:16:27.942768  139570 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:16:27.942868  139570 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:16:27.942889  139570 kubeadm.go:310] 
	I1010 19:16:27.943779  139570 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:16:27.943914  139570 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:16:27.944014  139570 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1010 19:16:27.944165  139570 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-947203] and IPs [192.168.61.112 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-947203] and IPs [192.168.61.112 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-947203] and IPs [192.168.61.112 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-947203] and IPs [192.168.61.112 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
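	(Editor's note, not part of the captured log.) The repeated [kubelet-check] failures above come from kubeadm polling the kubelet's healthz endpoint on localhost:10248; "connection refused" means the kubelet never started serving, which is why the control plane times out. A minimal Go sketch of the same probe is shown below, illustrative only, with the endpoint taken from the log output.

// kubelet_healthz_probe_sketch.go: checks the endpoint kubeadm's kubelet-check hits.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// On the failing node this returns "connection refused",
		// matching the kubelet-check messages in the log.
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
}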
	
	I1010 19:16:27.944220  139570 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:16:29.328989  139570 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.38473057s)
	I1010 19:16:29.329076  139570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:16:29.346369  139570 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:16:29.359251  139570 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:16:29.359280  139570 kubeadm.go:157] found existing configuration files:
	
	I1010 19:16:29.359341  139570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:16:29.372446  139570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:16:29.372582  139570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:16:29.385560  139570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:16:29.396827  139570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:16:29.396920  139570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:16:29.408689  139570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:16:29.419979  139570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:16:29.420047  139570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:16:29.431738  139570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:16:29.441586  139570 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:16:29.441655  139570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:16:29.451644  139570 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:16:29.704579  139570 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:18:25.881591  139570 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:18:25.881671  139570 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:18:25.883710  139570 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:18:25.883786  139570 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:18:25.883865  139570 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:18:25.883960  139570 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:18:25.884058  139570 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:18:25.884144  139570 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:18:25.886143  139570 out.go:235]   - Generating certificates and keys ...
	I1010 19:18:25.886216  139570 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:18:25.886287  139570 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:18:25.886362  139570 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:18:25.886441  139570 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:18:25.886530  139570 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:18:25.886577  139570 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:18:25.886633  139570 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:18:25.886684  139570 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:18:25.886749  139570 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:18:25.886842  139570 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:18:25.886880  139570 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:18:25.886962  139570 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:18:25.887024  139570 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:18:25.887070  139570 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:18:25.887136  139570 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:18:25.887199  139570 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:18:25.887284  139570 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:18:25.887409  139570 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:18:25.887466  139570 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:18:25.887574  139570 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:18:25.889067  139570 out.go:235]   - Booting up control plane ...
	I1010 19:18:25.889180  139570 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:18:25.889267  139570 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:18:25.889343  139570 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:18:25.889449  139570 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:18:25.889638  139570 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:18:25.889717  139570 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:18:25.889812  139570 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:18:25.890055  139570 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:18:25.890157  139570 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:18:25.890346  139570 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:18:25.890416  139570 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:18:25.890594  139570 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:18:25.890658  139570 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:18:25.890865  139570 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:18:25.890949  139570 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:18:25.891219  139570 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:18:25.891230  139570 kubeadm.go:310] 
	I1010 19:18:25.891277  139570 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:18:25.891310  139570 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:18:25.891320  139570 kubeadm.go:310] 
	I1010 19:18:25.891350  139570 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:18:25.891382  139570 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:18:25.891461  139570 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:18:25.891468  139570 kubeadm.go:310] 
	I1010 19:18:25.891570  139570 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:18:25.891613  139570 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:18:25.891657  139570 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:18:25.891666  139570 kubeadm.go:310] 
	I1010 19:18:25.891834  139570 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:18:25.891927  139570 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:18:25.891936  139570 kubeadm.go:310] 
	I1010 19:18:25.892071  139570 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:18:25.892196  139570 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:18:25.892278  139570 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:18:25.892352  139570 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:18:25.892366  139570 kubeadm.go:310] 
	I1010 19:18:25.892424  139570 kubeadm.go:394] duration metric: took 3m56.526191175s to StartCluster
	I1010 19:18:25.892485  139570 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:18:25.892539  139570 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:18:25.937971  139570 cri.go:89] found id: ""
	I1010 19:18:25.937998  139570 logs.go:282] 0 containers: []
	W1010 19:18:25.938007  139570 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:18:25.938014  139570 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:18:25.938086  139570 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:18:25.975062  139570 cri.go:89] found id: ""
	I1010 19:18:25.975089  139570 logs.go:282] 0 containers: []
	W1010 19:18:25.975100  139570 logs.go:284] No container was found matching "etcd"
	I1010 19:18:25.975108  139570 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:18:25.975173  139570 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:18:26.013338  139570 cri.go:89] found id: ""
	I1010 19:18:26.013368  139570 logs.go:282] 0 containers: []
	W1010 19:18:26.013377  139570 logs.go:284] No container was found matching "coredns"
	I1010 19:18:26.013384  139570 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:18:26.013456  139570 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:18:26.049767  139570 cri.go:89] found id: ""
	I1010 19:18:26.049801  139570 logs.go:282] 0 containers: []
	W1010 19:18:26.049817  139570 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:18:26.049824  139570 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:18:26.049883  139570 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:18:26.083828  139570 cri.go:89] found id: ""
	I1010 19:18:26.083854  139570 logs.go:282] 0 containers: []
	W1010 19:18:26.083862  139570 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:18:26.083869  139570 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:18:26.083921  139570 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:18:26.116996  139570 cri.go:89] found id: ""
	I1010 19:18:26.117036  139570 logs.go:282] 0 containers: []
	W1010 19:18:26.117048  139570 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:18:26.117058  139570 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:18:26.117119  139570 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:18:26.155785  139570 cri.go:89] found id: ""
	I1010 19:18:26.155810  139570 logs.go:282] 0 containers: []
	W1010 19:18:26.155819  139570 logs.go:284] No container was found matching "kindnet"
	I1010 19:18:26.155829  139570 logs.go:123] Gathering logs for kubelet ...
	I1010 19:18:26.155841  139570 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:18:26.207677  139570 logs.go:123] Gathering logs for dmesg ...
	I1010 19:18:26.207721  139570 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:18:26.221927  139570 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:18:26.221957  139570 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:18:26.354434  139570 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:18:26.354457  139570 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:18:26.354470  139570 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:18:26.471736  139570 logs.go:123] Gathering logs for container status ...
	I1010 19:18:26.471787  139570 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1010 19:18:26.529541  139570 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:18:26.529598  139570 out.go:270] * 
	* 
	W1010 19:18:26.529661  139570 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:18:26.529674  139570 out.go:270] * 
	* 
	W1010 19:18:26.530549  139570 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:18:26.533974  139570 out.go:201] 
	W1010 19:18:26.535318  139570 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:18:26.535367  139570 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:18:26.535405  139570 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1010 19:18:26.537148  139570 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-947203 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203
E1010 19:18:26.691207   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 6 (229.414441ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:18:26.810005  147304 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-947203" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (286.53s)
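Editor's note: the failure above ends with minikube's own suggestion (related issue #4172) to retry with a kubelet cgroup-driver override. A minimal retry sketch, reusing the same profile and arguments recorded in the failing command; the --extra-config value is taken verbatim from the suggestion in the log, and this report does not confirm that it resolves this particular kubelet boot timeout:

	# Collect the full log referenced in the advice box, then retry with the suggested override.
	out/minikube-linux-amd64 logs --file=logs.txt -p old-k8s-version-947203
	out/minikube-linux-amd64 start -p old-k8s-version-947203 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd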

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-320324 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-320324 --alsologtostderr -v=3: exit status 82 (2m0.845077898s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-320324"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 19:15:41.954865  145062 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:15:41.955003  145062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:15:41.955015  145062 out.go:358] Setting ErrFile to fd 2...
	I1010 19:15:41.955022  145062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:15:41.955219  145062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:15:41.955453  145062 out.go:352] Setting JSON to false
	I1010 19:15:41.955527  145062 mustload.go:65] Loading cluster: no-preload-320324
	I1010 19:15:41.955868  145062 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:15:41.955937  145062 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/config.json ...
	I1010 19:15:41.956103  145062 mustload.go:65] Loading cluster: no-preload-320324
	I1010 19:15:41.956198  145062 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:15:41.956221  145062 stop.go:39] StopHost: no-preload-320324
	I1010 19:15:41.956598  145062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:15:41.956656  145062 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:15:41.971853  145062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I1010 19:15:41.972369  145062 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:15:41.973048  145062 main.go:141] libmachine: Using API Version  1
	I1010 19:15:41.973086  145062 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:15:41.973496  145062 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:15:41.976150  145062 out.go:177] * Stopping node "no-preload-320324"  ...
	I1010 19:15:41.977399  145062 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1010 19:15:41.977439  145062 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:15:41.977692  145062 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1010 19:15:41.977735  145062 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:15:41.980599  145062 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:15:41.981080  145062 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:14:36 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:15:41.981118  145062 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:15:41.981296  145062 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:15:41.981506  145062 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:15:41.981682  145062 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:15:41.981861  145062 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:15:42.080560  145062 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1010 19:15:42.146508  145062 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1010 19:15:42.214869  145062 main.go:141] libmachine: Stopping "no-preload-320324"...
	I1010 19:15:42.214923  145062 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:15:42.216803  145062 main.go:141] libmachine: (no-preload-320324) Calling .Stop
	I1010 19:15:42.221243  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 0/120
	I1010 19:15:43.222787  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 1/120
	I1010 19:15:44.224014  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 2/120
	I1010 19:15:45.225370  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 3/120
	I1010 19:15:46.227619  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 4/120
	I1010 19:15:47.229637  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 5/120
	I1010 19:15:48.231174  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 6/120
	I1010 19:15:49.232989  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 7/120
	I1010 19:15:50.235569  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 8/120
	I1010 19:15:51.236928  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 9/120
	I1010 19:15:52.239195  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 10/120
	I1010 19:15:53.240553  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 11/120
	I1010 19:15:54.242041  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 12/120
	I1010 19:15:55.243581  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 13/120
	I1010 19:15:56.245314  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 14/120
	I1010 19:15:57.247418  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 15/120
	I1010 19:15:58.249342  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 16/120
	I1010 19:15:59.250800  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 17/120
	I1010 19:16:00.252275  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 18/120
	I1010 19:16:01.253895  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 19/120
	I1010 19:16:02.255749  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 20/120
	I1010 19:16:03.257413  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 21/120
	I1010 19:16:04.259245  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 22/120
	I1010 19:16:05.261681  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 23/120
	I1010 19:16:06.263617  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 24/120
	I1010 19:16:07.265949  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 25/120
	I1010 19:16:08.268106  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 26/120
	I1010 19:16:09.269680  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 27/120
	I1010 19:16:10.271210  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 28/120
	I1010 19:16:11.272619  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 29/120
	I1010 19:16:12.274438  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 30/120
	I1010 19:16:13.276325  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 31/120
	I1010 19:16:14.277874  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 32/120
	I1010 19:16:15.279388  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 33/120
	I1010 19:16:16.280990  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 34/120
	I1010 19:16:17.283282  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 35/120
	I1010 19:16:18.285136  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 36/120
	I1010 19:16:19.287761  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 37/120
	I1010 19:16:20.289429  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 38/120
	I1010 19:16:21.291380  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 39/120
	I1010 19:16:22.293675  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 40/120
	I1010 19:16:23.295483  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 41/120
	I1010 19:16:24.297112  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 42/120
	I1010 19:16:25.299563  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 43/120
	I1010 19:16:26.301219  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 44/120
	I1010 19:16:27.303322  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 45/120
	I1010 19:16:28.305083  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 46/120
	I1010 19:16:29.306647  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 47/120
	I1010 19:16:30.308505  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 48/120
	I1010 19:16:31.309927  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 49/120
	I1010 19:16:32.311901  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 50/120
	I1010 19:16:33.313620  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 51/120
	I1010 19:16:34.315636  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 52/120
	I1010 19:16:35.317069  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 53/120
	I1010 19:16:36.318458  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 54/120
	I1010 19:16:37.320317  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 55/120
	I1010 19:16:38.321702  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 56/120
	I1010 19:16:39.323527  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 57/120
	I1010 19:16:40.325736  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 58/120
	I1010 19:16:41.327045  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 59/120
	I1010 19:16:42.329291  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 60/120
	I1010 19:16:43.331443  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 61/120
	I1010 19:16:44.333850  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 62/120
	I1010 19:16:45.335456  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 63/120
	I1010 19:16:46.336939  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 64/120
	I1010 19:16:47.339126  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 65/120
	I1010 19:16:48.340781  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 66/120
	I1010 19:16:49.342341  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 67/120
	I1010 19:16:50.343696  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 68/120
	I1010 19:16:51.345183  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 69/120
	I1010 19:16:52.348556  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 70/120
	I1010 19:16:53.350305  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 71/120
	I1010 19:16:54.352235  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 72/120
	I1010 19:16:55.353770  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 73/120
	I1010 19:16:56.355569  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 74/120
	I1010 19:16:57.357706  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 75/120
	I1010 19:16:58.359356  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 76/120
	I1010 19:16:59.664475  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 77/120
	I1010 19:17:00.666333  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 78/120
	I1010 19:17:01.667932  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 79/120
	I1010 19:17:02.670281  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 80/120
	I1010 19:17:03.671677  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 81/120
	I1010 19:17:04.673257  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 82/120
	I1010 19:17:05.674586  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 83/120
	I1010 19:17:06.676190  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 84/120
	I1010 19:17:07.677784  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 85/120
	I1010 19:17:08.679416  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 86/120
	I1010 19:17:09.681019  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 87/120
	I1010 19:17:10.682474  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 88/120
	I1010 19:17:11.684232  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 89/120
	I1010 19:17:12.686560  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 90/120
	I1010 19:17:13.687845  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 91/120
	I1010 19:17:14.689199  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 92/120
	I1010 19:17:15.691739  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 93/120
	I1010 19:17:16.693398  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 94/120
	I1010 19:17:17.695823  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 95/120
	I1010 19:17:18.697743  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 96/120
	I1010 19:17:19.699107  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 97/120
	I1010 19:17:20.701385  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 98/120
	I1010 19:17:21.702835  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 99/120
	I1010 19:17:22.705127  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 100/120
	I1010 19:17:23.707418  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 101/120
	I1010 19:17:24.708701  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 102/120
	I1010 19:17:25.710721  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 103/120
	I1010 19:17:26.712257  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 104/120
	I1010 19:17:27.714155  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 105/120
	I1010 19:17:28.715925  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 106/120
	I1010 19:17:29.717432  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 107/120
	I1010 19:17:30.720050  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 108/120
	I1010 19:17:31.721584  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 109/120
	I1010 19:17:32.723435  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 110/120
	I1010 19:17:33.725084  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 111/120
	I1010 19:17:34.726693  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 112/120
	I1010 19:17:35.728323  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 113/120
	I1010 19:17:36.729932  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 114/120
	I1010 19:17:37.732021  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 115/120
	I1010 19:17:38.733430  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 116/120
	I1010 19:17:39.734919  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 117/120
	I1010 19:17:40.736538  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 118/120
	I1010 19:17:41.738223  145062 main.go:141] libmachine: (no-preload-320324) Waiting for machine to stop 119/120
	I1010 19:17:42.739657  145062 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1010 19:17:42.739753  145062 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1010 19:17:42.741898  145062 out.go:201] 
	W1010 19:17:42.743641  145062 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1010 19:17:42.743666  145062 out.go:270] * 
	* 
	W1010 19:17:42.748157  145062 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:17:42.750387  145062 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-320324 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320324 -n no-preload-320324
E1010 19:17:45.729096   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:45.863476   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:50.018938   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320324 -n no-preload-320324: exit status 3 (18.5328342s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:18:01.285225  146523 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.11:22: connect: no route to host
	E1010 19:18:01.285247  146523 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.11:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-320324" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.38s)
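Note: the stop failures in this group all show the same shape in the captured logs: the driver's Stop call is issued, the machine is polled once per second for 120 iterations ("Waiting for machine to stop N/120"), and the command gives up with GUEST_STOP_TIMEOUT (exit status 82) because the VM still reports "Running". A minimal, illustrative Go sketch of that bounded-poll pattern is below; vmState and requestStop are hypothetical stand-ins and this is not minikube's actual source.

```go
// Illustrative sketch of the bounded stop-poll loop visible in the logs above.
// Not minikube source; vmState/requestStop are assumed placeholders.
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState stands in for the driver's state query (assumption for illustration).
func vmState() string { return "Running" }

// requestStop stands in for the driver's Stop call (assumption for illustration).
func requestStop() {}

func stopWithTimeout(attempts int) error {
	requestStop()
	for i := 0; i < attempts; i++ {
		if vmState() != "Running" {
			return nil // machine reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// 120 one-second polls, matching the counter seen in the report;
	// on failure this surfaces as GUEST_STOP_TIMEOUT / exit status 82.
	if err := stopWithTimeout(120); err != nil {
		fmt.Println("stop err:", err)
	}
}
```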

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-541370 --alsologtostderr -v=3
E1010 19:16:34.177536   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:44.419446   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-541370 --alsologtostderr -v=3: exit status 82 (2m0.747842548s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-541370"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 19:16:30.133814  145604 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:16:30.134106  145604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:16:30.134117  145604 out.go:358] Setting ErrFile to fd 2...
	I1010 19:16:30.134121  145604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:16:30.134305  145604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:16:30.134535  145604 out.go:352] Setting JSON to false
	I1010 19:16:30.134609  145604 mustload.go:65] Loading cluster: embed-certs-541370
	I1010 19:16:30.134970  145604 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:16:30.135048  145604 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/config.json ...
	I1010 19:16:30.135241  145604 mustload.go:65] Loading cluster: embed-certs-541370
	I1010 19:16:30.135353  145604 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:16:30.135387  145604 stop.go:39] StopHost: embed-certs-541370
	I1010 19:16:30.135757  145604 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:16:30.135808  145604 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:16:30.152934  145604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I1010 19:16:30.153815  145604 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:16:30.154546  145604 main.go:141] libmachine: Using API Version  1
	I1010 19:16:30.154573  145604 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:16:30.154922  145604 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:16:30.157383  145604 out.go:177] * Stopping node "embed-certs-541370"  ...
	I1010 19:16:30.158859  145604 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1010 19:16:30.158914  145604 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:16:30.159183  145604 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1010 19:16:30.159228  145604 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:16:30.162838  145604 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:16:30.163394  145604 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:15:07 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:16:30.163426  145604 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:16:30.163588  145604 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:16:30.163732  145604 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:16:30.163886  145604 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:16:30.163985  145604 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:16:30.284049  145604 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1010 19:16:30.349466  145604 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1010 19:16:30.419468  145604 main.go:141] libmachine: Stopping "embed-certs-541370"...
	I1010 19:16:30.419546  145604 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:16:30.421526  145604 main.go:141] libmachine: (embed-certs-541370) Calling .Stop
	I1010 19:16:30.425717  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 0/120
	I1010 19:16:31.427325  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 1/120
	I1010 19:16:32.428659  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 2/120
	I1010 19:16:33.430196  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 3/120
	I1010 19:16:34.431561  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 4/120
	I1010 19:16:35.433854  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 5/120
	I1010 19:16:36.435453  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 6/120
	I1010 19:16:37.437539  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 7/120
	I1010 19:16:38.439475  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 8/120
	I1010 19:16:39.440936  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 9/120
	I1010 19:16:40.443422  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 10/120
	I1010 19:16:41.445029  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 11/120
	I1010 19:16:42.447316  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 12/120
	I1010 19:16:43.448970  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 13/120
	I1010 19:16:44.450403  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 14/120
	I1010 19:16:45.452620  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 15/120
	I1010 19:16:46.454107  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 16/120
	I1010 19:16:47.456110  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 17/120
	I1010 19:16:48.457655  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 18/120
	I1010 19:16:49.459686  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 19/120
	I1010 19:16:50.461829  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 20/120
	I1010 19:16:51.463567  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 21/120
	I1010 19:16:52.465022  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 22/120
	I1010 19:16:53.466646  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 23/120
	I1010 19:16:54.468711  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 24/120
	I1010 19:16:55.470447  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 25/120
	I1010 19:16:56.471889  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 26/120
	I1010 19:16:57.473338  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 27/120
	I1010 19:16:58.475573  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 28/120
	I1010 19:16:59.664694  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 29/120
	I1010 19:17:00.666747  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 30/120
	I1010 19:17:01.668874  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 31/120
	I1010 19:17:02.670571  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 32/120
	I1010 19:17:03.672062  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 33/120
	I1010 19:17:04.674016  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 34/120
	I1010 19:17:05.675862  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 35/120
	I1010 19:17:06.677292  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 36/120
	I1010 19:17:07.678995  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 37/120
	I1010 19:17:08.680167  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 38/120
	I1010 19:17:09.681697  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 39/120
	I1010 19:17:10.683946  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 40/120
	I1010 19:17:11.686136  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 41/120
	I1010 19:17:12.687951  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 42/120
	I1010 19:17:13.689010  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 43/120
	I1010 19:17:14.690942  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 44/120
	I1010 19:17:15.692795  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 45/120
	I1010 19:17:16.694035  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 46/120
	I1010 19:17:17.695520  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 47/120
	I1010 19:17:18.697218  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 48/120
	I1010 19:17:19.698785  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 49/120
	I1010 19:17:20.701176  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 50/120
	I1010 19:17:21.702747  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 51/120
	I1010 19:17:22.704659  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 52/120
	I1010 19:17:23.706381  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 53/120
	I1010 19:17:24.708052  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 54/120
	I1010 19:17:25.710444  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 55/120
	I1010 19:17:26.711941  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 56/120
	I1010 19:17:27.713535  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 57/120
	I1010 19:17:28.715208  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 58/120
	I1010 19:17:29.716879  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 59/120
	I1010 19:17:30.719452  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 60/120
	I1010 19:17:31.721266  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 61/120
	I1010 19:17:32.723429  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 62/120
	I1010 19:17:33.725664  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 63/120
	I1010 19:17:34.727535  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 64/120
	I1010 19:17:35.729353  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 65/120
	I1010 19:17:36.731151  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 66/120
	I1010 19:17:37.732682  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 67/120
	I1010 19:17:38.734597  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 68/120
	I1010 19:17:39.735747  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 69/120
	I1010 19:17:40.737584  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 70/120
	I1010 19:17:41.738834  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 71/120
	I1010 19:17:42.740964  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 72/120
	I1010 19:17:43.742467  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 73/120
	I1010 19:17:44.743766  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 74/120
	I1010 19:17:45.746016  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 75/120
	I1010 19:17:46.747416  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 76/120
	I1010 19:17:47.748704  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 77/120
	I1010 19:17:48.750722  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 78/120
	I1010 19:17:49.752260  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 79/120
	I1010 19:17:50.754137  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 80/120
	I1010 19:17:51.755407  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 81/120
	I1010 19:17:52.756732  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 82/120
	I1010 19:17:53.758064  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 83/120
	I1010 19:17:54.759469  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 84/120
	I1010 19:17:55.761746  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 85/120
	I1010 19:17:56.763172  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 86/120
	I1010 19:17:57.765360  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 87/120
	I1010 19:17:58.766906  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 88/120
	I1010 19:17:59.768312  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 89/120
	I1010 19:18:00.769925  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 90/120
	I1010 19:18:01.771398  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 91/120
	I1010 19:18:02.772956  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 92/120
	I1010 19:18:03.774191  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 93/120
	I1010 19:18:04.775650  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 94/120
	I1010 19:18:05.777976  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 95/120
	I1010 19:18:06.779541  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 96/120
	I1010 19:18:07.780827  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 97/120
	I1010 19:18:08.782598  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 98/120
	I1010 19:18:09.784088  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 99/120
	I1010 19:18:10.786624  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 100/120
	I1010 19:18:11.788388  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 101/120
	I1010 19:18:12.789885  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 102/120
	I1010 19:18:13.791255  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 103/120
	I1010 19:18:14.792906  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 104/120
	I1010 19:18:15.795026  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 105/120
	I1010 19:18:16.796598  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 106/120
	I1010 19:18:17.797908  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 107/120
	I1010 19:18:18.799492  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 108/120
	I1010 19:18:19.800749  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 109/120
	I1010 19:18:20.801992  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 110/120
	I1010 19:18:21.803283  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 111/120
	I1010 19:18:22.805152  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 112/120
	I1010 19:18:23.807509  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 113/120
	I1010 19:18:24.808898  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 114/120
	I1010 19:18:25.811047  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 115/120
	I1010 19:18:26.812441  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 116/120
	I1010 19:18:27.814202  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 117/120
	I1010 19:18:28.816248  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 118/120
	I1010 19:18:29.817837  145604 main.go:141] libmachine: (embed-certs-541370) Waiting for machine to stop 119/120
	I1010 19:18:30.818502  145604 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1010 19:18:30.818577  145604 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1010 19:18:30.820748  145604 out.go:201] 
	W1010 19:18:30.822507  145604 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1010 19:18:30.822524  145604 out.go:270] * 
	* 
	W1010 19:18:30.826526  145604 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:18:30.828013  145604 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-541370 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-541370 -n embed-certs-541370
E1010 19:18:30.920064   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-541370 -n embed-certs-541370: exit status 3 (18.583292031s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:18:49.413231  147449 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E1010 19:18:49.413254  147449 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-541370" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320324 -n no-preload-320324
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320324 -n no-preload-320324: exit status 3 (3.199703852s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:18:04.485234  147101 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.11:22: connect: no route to host
	E1010 19:18:04.485258  147101 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.11:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-320324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-320324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154940723s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.11:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-320324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320324 -n no-preload-320324
E1010 19:18:11.750878   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:11.757347   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:11.768732   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:11.790727   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:11.832245   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:11.913772   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:12.075389   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:12.397296   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:13.039488   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320324 -n no-preload-320324: exit status 3 (3.060730828s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:18:13.701262  147167 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.11:22: connect: no route to host
	E1010 19:18:13.701284  147167 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.11:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-320324" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-947203 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-947203 create -f testdata/busybox.yaml: exit status 1 (45.321455ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-947203" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-947203 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 6 (226.75386ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:18:27.081762  147343 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-947203" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 6 (232.948937ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:18:27.317125  147373 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-947203" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (83.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-947203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1010 19:18:28.350042   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:28.356536   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:28.367949   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:28.389350   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:28.430813   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:28.512312   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:28.674105   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:28.995781   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:29.637911   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-947203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m23.418302748s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-947203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-947203 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-947203 describe deploy/metrics-server -n kube-system: exit status 1 (44.443597ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-947203" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-947203 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 6 (233.167536ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:19:51.013257  147992 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-947203" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (83.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-361847 --alsologtostderr -v=3
E1010 19:18:48.845815   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-361847 --alsologtostderr -v=3: exit status 82 (2m0.523908684s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-361847"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 19:18:42.419560  147580 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:18:42.419878  147580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:18:42.419895  147580 out.go:358] Setting ErrFile to fd 2...
	I1010 19:18:42.419900  147580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:18:42.420095  147580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:18:42.420330  147580 out.go:352] Setting JSON to false
	I1010 19:18:42.420407  147580 mustload.go:65] Loading cluster: default-k8s-diff-port-361847
	I1010 19:18:42.420728  147580 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:18:42.420794  147580 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:18:42.420994  147580 mustload.go:65] Loading cluster: default-k8s-diff-port-361847
	I1010 19:18:42.421106  147580 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:18:42.421167  147580 stop.go:39] StopHost: default-k8s-diff-port-361847
	I1010 19:18:42.421517  147580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:18:42.421574  147580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:18:42.436995  147580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33565
	I1010 19:18:42.438980  147580 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:18:42.439625  147580 main.go:141] libmachine: Using API Version  1
	I1010 19:18:42.439646  147580 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:18:42.440017  147580 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:18:42.442648  147580 out.go:177] * Stopping node "default-k8s-diff-port-361847"  ...
	I1010 19:18:42.444096  147580 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1010 19:18:42.444128  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:18:42.444380  147580 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1010 19:18:42.444417  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:18:42.447575  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:18:42.447969  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:17:14 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:18:42.448001  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:18:42.448214  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:18:42.448413  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:18:42.448591  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:18:42.448737  147580 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:18:42.556907  147580 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1010 19:18:42.619989  147580 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1010 19:18:42.682651  147580 main.go:141] libmachine: Stopping "default-k8s-diff-port-361847"...
	I1010 19:18:42.682689  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:18:42.684430  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Stop
	I1010 19:18:42.688514  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 0/120
	I1010 19:18:43.691095  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 1/120
	I1010 19:18:44.692714  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 2/120
	I1010 19:18:45.694160  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 3/120
	I1010 19:18:46.696343  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 4/120
	I1010 19:18:47.697826  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 5/120
	I1010 19:18:48.699316  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 6/120
	I1010 19:18:49.700798  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 7/120
	I1010 19:18:50.702248  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 8/120
	I1010 19:18:51.703684  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 9/120
	I1010 19:18:52.704941  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 10/120
	I1010 19:18:53.706269  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 11/120
	I1010 19:18:54.707483  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 12/120
	I1010 19:18:55.708905  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 13/120
	I1010 19:18:56.710280  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 14/120
	I1010 19:18:57.712358  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 15/120
	I1010 19:18:58.713818  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 16/120
	I1010 19:18:59.715225  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 17/120
	I1010 19:19:00.716768  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 18/120
	I1010 19:19:01.718170  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 19/120
	I1010 19:19:02.719824  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 20/120
	I1010 19:19:03.721389  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 21/120
	I1010 19:19:04.722808  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 22/120
	I1010 19:19:05.724698  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 23/120
	I1010 19:19:06.726553  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 24/120
	I1010 19:19:07.728742  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 25/120
	I1010 19:19:08.730527  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 26/120
	I1010 19:19:09.732115  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 27/120
	I1010 19:19:10.733800  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 28/120
	I1010 19:19:11.735599  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 29/120
	I1010 19:19:12.738050  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 30/120
	I1010 19:19:13.739421  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 31/120
	I1010 19:19:14.741396  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 32/120
	I1010 19:19:15.743092  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 33/120
	I1010 19:19:16.744526  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 34/120
	I1010 19:19:17.746651  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 35/120
	I1010 19:19:18.748064  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 36/120
	I1010 19:19:19.749848  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 37/120
	I1010 19:19:20.751510  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 38/120
	I1010 19:19:21.753242  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 39/120
	I1010 19:19:22.755938  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 40/120
	I1010 19:19:23.757506  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 41/120
	I1010 19:19:24.759063  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 42/120
	I1010 19:19:25.760550  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 43/120
	I1010 19:19:26.762250  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 44/120
	I1010 19:19:27.764541  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 45/120
	I1010 19:19:28.766078  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 46/120
	I1010 19:19:29.768009  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 47/120
	I1010 19:19:30.769663  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 48/120
	I1010 19:19:31.771000  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 49/120
	I1010 19:19:32.773400  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 50/120
	I1010 19:19:33.774879  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 51/120
	I1010 19:19:34.776651  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 52/120
	I1010 19:19:35.778143  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 53/120
	I1010 19:19:36.779534  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 54/120
	I1010 19:19:37.781701  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 55/120
	I1010 19:19:38.783318  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 56/120
	I1010 19:19:39.784688  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 57/120
	I1010 19:19:40.786287  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 58/120
	I1010 19:19:41.787767  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 59/120
	I1010 19:19:42.790092  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 60/120
	I1010 19:19:43.791789  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 61/120
	I1010 19:19:44.793272  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 62/120
	I1010 19:19:45.794856  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 63/120
	I1010 19:19:46.796307  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 64/120
	I1010 19:19:47.798542  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 65/120
	I1010 19:19:48.800083  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 66/120
	I1010 19:19:49.801931  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 67/120
	I1010 19:19:50.803340  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 68/120
	I1010 19:19:51.804598  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 69/120
	I1010 19:19:52.806781  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 70/120
	I1010 19:19:53.808098  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 71/120
	I1010 19:19:54.809561  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 72/120
	I1010 19:19:55.811254  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 73/120
	I1010 19:19:56.812735  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 74/120
	I1010 19:19:57.814796  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 75/120
	I1010 19:19:58.816384  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 76/120
	I1010 19:19:59.817728  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 77/120
	I1010 19:20:00.819750  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 78/120
	I1010 19:20:01.821279  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 79/120
	I1010 19:20:02.823650  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 80/120
	I1010 19:20:03.825313  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 81/120
	I1010 19:20:04.826681  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 82/120
	I1010 19:20:05.828356  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 83/120
	I1010 19:20:06.829891  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 84/120
	I1010 19:20:07.832313  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 85/120
	I1010 19:20:08.833791  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 86/120
	I1010 19:20:09.835238  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 87/120
	I1010 19:20:10.836716  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 88/120
	I1010 19:20:11.838346  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 89/120
	I1010 19:20:12.840051  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 90/120
	I1010 19:20:13.841478  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 91/120
	I1010 19:20:14.842953  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 92/120
	I1010 19:20:15.844374  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 93/120
	I1010 19:20:16.845769  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 94/120
	I1010 19:20:17.847978  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 95/120
	I1010 19:20:18.849314  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 96/120
	I1010 19:20:19.851103  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 97/120
	I1010 19:20:20.852568  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 98/120
	I1010 19:20:21.853808  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 99/120
	I1010 19:20:22.856043  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 100/120
	I1010 19:20:23.857673  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 101/120
	I1010 19:20:24.859059  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 102/120
	I1010 19:20:25.860544  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 103/120
	I1010 19:20:26.862069  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 104/120
	I1010 19:20:27.864287  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 105/120
	I1010 19:20:28.865625  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 106/120
	I1010 19:20:29.866911  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 107/120
	I1010 19:20:30.868522  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 108/120
	I1010 19:20:31.869983  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 109/120
	I1010 19:20:32.871250  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 110/120
	I1010 19:20:33.872545  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 111/120
	I1010 19:20:34.874020  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 112/120
	I1010 19:20:35.875344  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 113/120
	I1010 19:20:36.876814  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 114/120
	I1010 19:20:37.878876  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 115/120
	I1010 19:20:38.880272  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 116/120
	I1010 19:20:39.881805  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 117/120
	I1010 19:20:40.883347  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 118/120
	I1010 19:20:41.884841  147580 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for machine to stop 119/120
	I1010 19:20:42.885962  147580 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1010 19:20:42.886021  147580 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1010 19:20:42.888153  147580 out.go:201] 
	W1010 19:20:42.889692  147580 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1010 19:20:42.889711  147580 out.go:270] * 
	* 
	W1010 19:20:42.893966  147580 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:20:42.895539  147580 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-361847 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847
E1010 19:20:51.462194   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:20:55.611779   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847: exit status 3 (18.611880421s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:21:01.509194  148321 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.32:22: connect: no route to host
	E1010 19:21:01.509244  148321 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.32:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-361847" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.14s)
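The stop failure above follows a fixed pattern on the kvm2 driver: libmachine polls the domain roughly once per second for 120 attempts, then gives up with GUEST_STOP_TIMEOUT (exit status 82) while libvirt still reports the VM as "Running". A minimal manual triage sketch, assuming virsh is available on the Jenkins host and the libvirt domain carries the profile name shown in the log above:

	# Ask libvirt directly what state the domain is in (name taken from the log above).
	virsh domstate default-k8s-diff-port-361847
	# If it never shuts down gracefully, force it off and retry the stop the test ran.
	virsh destroy default-k8s-diff-port-361847
	out/minikube-linux-amd64 stop -p default-k8s-diff-port-361847 --alsologtostderr -v=3
	# Gather logs for the GitHub issue, as the failure box above recommends.
	out/minikube-linux-amd64 logs --file=logs.txt -p default-k8s-diff-port-361847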

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-541370 -n embed-certs-541370
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-541370 -n embed-certs-541370: exit status 3 (3.200156514s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:18:52.613226  147631 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E1010 19:18:52.613255  147631 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-541370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1010 19:18:52.728315   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-541370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154020159s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-541370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-541370 -n embed-certs-541370
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-541370 -n embed-certs-541370: exit status 3 (3.061666641s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:19:01.829326  147712 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E1010 19:19:01.829352  147712 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-541370" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)
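The addon enable fails before it reaches the cluster: with the node at 192.168.39.120 unreachable (no route to host), minikube's paused-state check (crictl over SSH) cannot run, so the command exits 11 with MK_ADDON_ENABLE_PAUSED. A short sketch of the sequence the test expects, assuming the same binary and profile as above; the host should report "Stopped" before the addon is enabled:

	# The test expects this to print "Stopped" after a clean stop.
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-541370
	# Only then is the dashboard addon enabled with the custom scraper image.
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-541370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4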

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (714.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-947203 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1010 19:19:55.561675   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:59.530707   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:20:10.499567   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:20:36.523249   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-947203 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m50.482577985s)

                                                
                                                
-- stdout --
	* [old-k8s-version-947203] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19787
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-947203" primary control-plane node in "old-k8s-version-947203" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-947203" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 19:19:53.562627  148123 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:19:53.562871  148123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:19:53.562879  148123 out.go:358] Setting ErrFile to fd 2...
	I1010 19:19:53.562884  148123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:19:53.563048  148123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:19:53.563600  148123 out.go:352] Setting JSON to false
	I1010 19:19:53.564501  148123 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10940,"bootTime":1728577054,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:19:53.564610  148123 start.go:139] virtualization: kvm guest
	I1010 19:19:53.566752  148123 out.go:177] * [old-k8s-version-947203] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:19:53.568117  148123 notify.go:220] Checking for updates...
	I1010 19:19:53.568120  148123 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:19:53.569368  148123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:19:53.570580  148123 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:19:53.571843  148123 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:19:53.573098  148123 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:19:53.574551  148123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:19:53.576202  148123 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:19:53.576606  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:19:53.576678  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:19:53.592556  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34001
	I1010 19:19:53.593087  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:19:53.593798  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:19:53.593827  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:19:53.594183  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:19:53.594393  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:19:53.596388  148123 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1010 19:19:53.597867  148123 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:19:53.598189  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:19:53.598231  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:19:53.613292  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33741
	I1010 19:19:53.613798  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:19:53.614294  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:19:53.614316  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:19:53.614627  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:19:53.614800  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:19:53.651184  148123 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 19:19:53.652620  148123 start.go:297] selected driver: kvm2
	I1010 19:19:53.652639  148123 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:19:53.652782  148123 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:19:53.653535  148123 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:19:53.653625  148123 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:19:53.669018  148123 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:19:53.669447  148123 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:19:53.669493  148123 cni.go:84] Creating CNI manager for ""
	I1010 19:19:53.669551  148123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:19:53.669611  148123 start.go:340] cluster config:
	{Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:19:53.669736  148123 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:19:53.671615  148123 out.go:177] * Starting "old-k8s-version-947203" primary control-plane node in "old-k8s-version-947203" cluster
	I1010 19:19:53.672968  148123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:19:53.673009  148123 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1010 19:19:53.673019  148123 cache.go:56] Caching tarball of preloaded images
	I1010 19:19:53.673111  148123 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:19:53.673125  148123 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1010 19:19:53.673230  148123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:19:53.673410  148123 start.go:360] acquireMachinesLock for old-k8s-version-947203: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:23:11.321975  148123 start.go:364] duration metric: took 3m17.648521443s to acquireMachinesLock for "old-k8s-version-947203"
	I1010 19:23:11.322043  148123 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:11.322056  148123 fix.go:54] fixHost starting: 
	I1010 19:23:11.322537  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:11.322596  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:11.339586  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1010 19:23:11.340091  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:11.340686  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:23:11.340719  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:11.341101  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:11.341295  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:11.341453  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetState
	I1010 19:23:11.343215  148123 fix.go:112] recreateIfNeeded on old-k8s-version-947203: state=Stopped err=<nil>
	I1010 19:23:11.343247  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	W1010 19:23:11.343408  148123 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:11.345653  148123 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-947203" ...
	I1010 19:23:11.347158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .Start
	I1010 19:23:11.347359  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring networks are active...
	I1010 19:23:11.348252  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network default is active
	I1010 19:23:11.348689  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network mk-old-k8s-version-947203 is active
	I1010 19:23:11.349139  148123 main.go:141] libmachine: (old-k8s-version-947203) Getting domain xml...
	I1010 19:23:11.349799  148123 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:23:12.671472  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting to get IP...
	I1010 19:23:12.672628  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.673145  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.673278  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.673132  149057 retry.go:31] will retry after 280.111667ms: waiting for machine to come up
	I1010 19:23:12.954865  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.955325  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.955377  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.955300  149057 retry.go:31] will retry after 289.967238ms: waiting for machine to come up
	I1010 19:23:13.247039  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.247663  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.247769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.247632  149057 retry.go:31] will retry after 319.085935ms: waiting for machine to come up
	I1010 19:23:13.568192  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.568690  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.568721  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.568657  149057 retry.go:31] will retry after 575.032546ms: waiting for machine to come up
	I1010 19:23:14.145650  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.146293  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.146319  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.146234  149057 retry.go:31] will retry after 702.803794ms: waiting for machine to come up
	I1010 19:23:14.851201  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.851736  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.851769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.851706  149057 retry.go:31] will retry after 883.195401ms: waiting for machine to come up
	I1010 19:23:15.736627  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:15.737199  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:15.737252  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:15.737155  149057 retry.go:31] will retry after 794.699815ms: waiting for machine to come up
	I1010 19:23:16.533952  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:16.534510  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:16.534545  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:16.534443  149057 retry.go:31] will retry after 961.751912ms: waiting for machine to come up
	I1010 19:23:17.497535  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:17.498007  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:17.498035  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:17.497936  149057 retry.go:31] will retry after 1.423503471s: waiting for machine to come up
	I1010 19:23:18.923057  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:18.923636  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:18.923671  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:18.923582  149057 retry.go:31] will retry after 2.09836426s: waiting for machine to come up
	I1010 19:23:21.023500  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:21.024054  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:21.024084  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:21.023999  149057 retry.go:31] will retry after 2.809962093s: waiting for machine to come up
	I1010 19:23:23.837827  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:23.838271  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:23.838295  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:23.838220  149057 retry.go:31] will retry after 2.562944835s: waiting for machine to come up
	I1010 19:23:26.402699  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:26.403250  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:26.403294  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:26.403192  149057 retry.go:31] will retry after 3.867656846s: waiting for machine to come up
	I1010 19:23:30.275222  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.275762  148123 main.go:141] libmachine: (old-k8s-version-947203) Found IP for machine: 192.168.61.112
	I1010 19:23:30.275779  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserving static IP address...
	I1010 19:23:30.275794  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has current primary IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.276478  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.276529  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserved static IP address: 192.168.61.112
	I1010 19:23:30.276553  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | skip adding static IP to network mk-old-k8s-version-947203 - found existing host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"}
	I1010 19:23:30.276574  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:23:30.276590  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting for SSH to be available...
	I1010 19:23:30.278486  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278854  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.278890  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278984  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:23:30.279016  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:23:30.279051  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:30.279068  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:23:30.279080  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:23:30.405053  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:30.405492  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:23:30.406224  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.408830  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409178  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.409204  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409462  148123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:23:30.409673  148123 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:30.409693  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:30.409916  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.412131  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412400  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.412427  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.412770  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.412965  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.413117  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.413368  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.413653  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.413668  148123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:30.525149  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:30.525176  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525417  148123 buildroot.go:166] provisioning hostname "old-k8s-version-947203"
	I1010 19:23:30.525478  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525703  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.528343  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528681  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.528708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528841  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.529035  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529185  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529303  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.529460  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.529635  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.529645  148123 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947203 && echo "old-k8s-version-947203" | sudo tee /etc/hostname
	I1010 19:23:30.655605  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947203
	
	I1010 19:23:30.655644  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.658439  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.658834  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.658869  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.659063  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.659285  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659458  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.659733  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.659905  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.659921  148123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:30.781974  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:30.782011  148123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:30.782033  148123 buildroot.go:174] setting up certificates
	I1010 19:23:30.782048  148123 provision.go:84] configureAuth start
	I1010 19:23:30.782059  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.782379  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.785189  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785584  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.785613  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785782  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.788086  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788432  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.788458  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788582  148123 provision.go:143] copyHostCerts
	I1010 19:23:30.788645  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:30.788660  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:30.788729  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:30.788839  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:30.788867  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:30.788900  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:30.788977  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:30.788986  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:30.789013  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:30.789081  148123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947203 san=[127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]
	I1010 19:23:31.240626  148123 provision.go:177] copyRemoteCerts
	I1010 19:23:31.240698  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:31.240737  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.243678  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244108  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.244138  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244356  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.244565  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.244765  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.244929  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.331337  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:31.356266  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1010 19:23:31.381186  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:31.405301  148123 provision.go:87] duration metric: took 623.235619ms to configureAuth
	I1010 19:23:31.405337  148123 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:31.405541  148123 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:23:31.405620  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.408111  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408444  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.408470  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408695  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.408947  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409109  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.409374  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.409583  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.409609  148123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:31.647023  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:31.647050  148123 machine.go:96] duration metric: took 1.237364012s to provisionDockerMachine
	I1010 19:23:31.647063  148123 start.go:293] postStartSetup for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:23:31.647073  148123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:31.647104  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.647464  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:31.647502  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.649991  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650288  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.650311  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650484  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.650637  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.650832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.650968  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.735985  148123 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:31.740689  148123 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:31.740725  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:31.740803  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:31.740947  148123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:31.741057  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:31.751383  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:31.777091  148123 start.go:296] duration metric: took 130.011618ms for postStartSetup
	I1010 19:23:31.777134  148123 fix.go:56] duration metric: took 20.455078936s for fixHost
	I1010 19:23:31.777158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.780083  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780661  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.780708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.781069  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781245  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781382  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.781534  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.781711  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.781721  148123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:31.893727  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588211.849777581
	
	I1010 19:23:31.893756  148123 fix.go:216] guest clock: 1728588211.849777581
	I1010 19:23:31.893763  148123 fix.go:229] Guest: 2024-10-10 19:23:31.849777581 +0000 UTC Remote: 2024-10-10 19:23:31.777138808 +0000 UTC m=+218.253674512 (delta=72.638773ms)
	I1010 19:23:31.893806  148123 fix.go:200] guest clock delta is within tolerance: 72.638773ms
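	The "guest clock" lines above compare the VM's date +%s.%N output against the host's wall clock and keep the existing host only when the delta stays within a tolerance. A minimal Go sketch of that comparison (the one-second tolerance is an illustrative assumption, and the hard-coded timestamps are copied from the log; this is not minikube's own code):

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// withinTolerance reports whether the guest and host clocks differ by
	// less than the given tolerance, in either direction.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		return math.Abs(float64(delta)) < float64(tolerance)
	}

	func main() {
		// Timestamps taken from the log lines above.
		guest := time.Unix(1728588211, 849777581).UTC()
		host := time.Date(2024, 10, 10, 19, 23, 31, 777138808, time.UTC)
		fmt.Println(withinTolerance(guest, host, time.Second)) // true: delta is ~72ms
	}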
	I1010 19:23:31.893813  148123 start.go:83] releasing machines lock for "old-k8s-version-947203", held for 20.571797307s
	I1010 19:23:31.893848  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.894151  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:31.896747  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897156  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.897207  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897385  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898036  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898375  148123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:31.898422  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.898461  148123 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:31.898487  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.901274  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901450  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901608  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901649  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901784  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.901920  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901945  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901952  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902112  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.902130  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902306  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902348  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.902412  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902533  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:32.016264  148123 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:32.022618  148123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:32.169502  148123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:32.175960  148123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:32.176039  148123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:32.193584  148123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:32.193615  148123 start.go:495] detecting cgroup driver to use...
	I1010 19:23:32.193698  148123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:32.210923  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:32.227172  148123 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:32.227254  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:32.242142  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:32.256896  148123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:32.387462  148123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:32.575184  148123 docker.go:233] disabling docker service ...
	I1010 19:23:32.575257  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:32.590667  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:32.604825  148123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:32.739827  148123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:32.863648  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:32.879435  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:32.899582  148123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:23:32.899659  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.915010  148123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:32.915081  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.931181  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.944038  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.955182  148123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:32.972220  148123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:32.982681  148123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:32.982739  148123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:32.997012  148123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:33.009468  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:33.180403  148123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:33.284934  148123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:33.285008  148123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:33.290063  148123 start.go:563] Will wait 60s for crictl version
	I1010 19:23:33.290124  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:33.294752  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:33.342710  148123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:33.342792  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.372821  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.409395  148123 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:23:33.411012  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:33.414374  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.414789  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:33.414836  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.415084  148123 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:33.420164  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:33.433442  148123 kubeadm.go:883] updating cluster {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:33.433631  148123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:23:33.433717  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:33.480013  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:33.480078  148123 ssh_runner.go:195] Run: which lz4
	I1010 19:23:33.485443  148123 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:33.490986  148123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:33.491032  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:23:35.227279  148123 crio.go:462] duration metric: took 1.74186828s to copy over tarball
	I1010 19:23:35.227383  148123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:38.216499  148123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.989082447s)
	I1010 19:23:38.216533  148123 crio.go:469] duration metric: took 2.989218587s to extract the tarball
	I1010 19:23:38.216541  148123 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:38.259699  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:38.308284  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:38.308313  148123 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:23:38.308422  148123 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:23:38.308477  148123 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.308482  148123 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.308593  148123 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.308617  148123 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.308405  148123 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.308446  148123 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.308530  148123 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310558  148123 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.310612  148123 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.310642  148123 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.310652  148123 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310645  148123 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.310560  148123 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.310733  148123 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.311068  148123 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:23:38.469174  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.469563  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.488463  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.490815  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.491841  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.496079  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.501292  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:23:38.581315  148123 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:23:38.581361  148123 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:23:38.581385  148123 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.581407  148123 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.581442  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.581457  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.642668  148123 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:23:38.642721  148123 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.642777  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658235  148123 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:23:38.658291  148123 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.658288  148123 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:23:38.658328  148123 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.658346  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658373  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.678038  148123 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:23:38.678097  148123 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.678155  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682719  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.682773  148123 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:23:38.682721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.682791  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.682812  148123 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:23:38.682845  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.682872  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.691181  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842626  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:38.842714  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.842754  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.842797  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.842721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.842851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842789  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.989994  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.005815  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:39.005923  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:39.010029  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:39.010105  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:39.010150  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:39.010230  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:39.134646  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:39.141431  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.176750  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:23:39.176945  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:23:39.176989  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:23:39.177038  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:23:39.177073  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:23:39.177104  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:23:39.335056  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:23:39.335113  148123 cache_images.go:92] duration metric: took 1.026784312s to LoadCachedImages
	W1010 19:23:39.335180  148123 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1010 19:23:39.335192  148123 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.20.0 crio true true} ...
	I1010 19:23:39.335305  148123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-947203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:39.335380  148123 ssh_runner.go:195] Run: crio config
	I1010 19:23:39.394307  148123 cni.go:84] Creating CNI manager for ""
	I1010 19:23:39.394338  148123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:39.394350  148123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:39.394378  148123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947203 NodeName:old-k8s-version-947203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:23:39.394572  148123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-947203"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:39.394662  148123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:23:39.405510  148123 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:39.405600  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:39.415968  148123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1010 19:23:39.443234  148123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:39.462448  148123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:23:39.481959  148123 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:39.486188  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:39.501922  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:39.642769  148123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:39.661335  148123 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203 for IP: 192.168.61.112
	I1010 19:23:39.661363  148123 certs.go:194] generating shared ca certs ...
	I1010 19:23:39.661384  148123 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:39.661595  148123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:39.661658  148123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:39.661671  148123 certs.go:256] generating profile certs ...
	I1010 19:23:39.661779  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key
	I1010 19:23:39.661867  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52
	I1010 19:23:39.661922  148123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key
	I1010 19:23:39.662312  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:39.662395  148123 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:39.662410  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:39.662447  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:39.662501  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:39.662531  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:39.662622  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:39.663372  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:39.710366  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:39.752724  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:39.801408  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:39.843557  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 19:23:39.892522  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:39.938519  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:39.966140  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:39.992974  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:40.019381  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:40.045769  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:40.080683  148123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:40.099747  148123 ssh_runner.go:195] Run: openssl version
	I1010 19:23:40.107865  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:40.123158  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128594  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128660  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.135319  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:40.147514  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:40.162463  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168387  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168512  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.176304  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:40.191306  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:40.204196  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211044  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211122  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.219574  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:40.232634  148123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:40.237639  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:40.244141  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:40.251188  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:40.258538  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:40.265029  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:40.271754  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
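	The openssl x509 -noout -in <cert> -checkend 86400 calls above exit non-zero when the named certificate will expire within the next 24 hours, which is presumably how the restart path decides whether the existing control-plane certs can be reused. A minimal Go sketch of the same check (the path in main is just an example taken from the log; this is not the minikube implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first PEM certificate in path
	// expires within d, mirroring a failing "openssl x509 -checkend".
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}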
	I1010 19:23:40.278352  148123 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:40.278453  148123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:40.278518  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.317771  148123 cri.go:89] found id: ""
	I1010 19:23:40.317838  148123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:40.330494  148123 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:40.330520  148123 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:40.330576  148123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:40.341984  148123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:40.343386  148123 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:23:40.344334  148123 kubeconfig.go:62] /home/jenkins/minikube-integration/19787-81676/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-947203" cluster setting kubeconfig missing "old-k8s-version-947203" context setting]
	I1010 19:23:40.345360  148123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:40.347128  148123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:40.357902  148123 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I1010 19:23:40.357944  148123 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:40.357961  148123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:40.358031  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.399970  148123 cri.go:89] found id: ""
	I1010 19:23:40.400057  148123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:40.418527  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:40.429182  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:40.429217  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:40.429262  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:40.439287  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:40.439363  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:40.449479  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:40.459288  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:40.459359  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:40.472733  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.484890  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:40.484958  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.495301  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:40.504750  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:40.504820  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:40.515034  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:40.526168  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:40.675022  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.566371  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.815296  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.930082  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:42.027597  148123 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:42.027713  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:42.528219  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.527735  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.028463  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.527916  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.027911  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.528797  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.028772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.527982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.027799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.527894  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.028352  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.527935  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:49.028421  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:49.528271  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.028462  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.527867  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.028782  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.528581  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.028732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.027852  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.528647  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.027982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.528065  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.027890  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.527753  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.027877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.028263  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.028743  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.528039  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:59.028519  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:59.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.528623  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.028697  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.528030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.028062  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.527876  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.028745  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.528569  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:04.028711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:04.528770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.028716  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.528083  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.028204  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.528430  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.027972  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.027829  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.528676  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:09.028538  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:09.527883  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.028693  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.528439  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.028528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.528679  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.028002  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.527904  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.028685  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.527833  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:14.028090  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:14.528072  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.028648  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.528388  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.028060  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.528472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.028098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.528125  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.028563  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.528613  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:19.028306  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:19.527854  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.028765  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.528245  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.027936  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.528172  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.028527  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.528446  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.028709  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.528711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:24.028402  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:24.528516  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.028207  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.527928  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.028032  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.527826  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.028609  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.528448  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.028018  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.528597  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.027960  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.528054  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.028227  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.527791  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.027790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.528761  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.028343  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.528553  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.028195  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.527895  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:34.028434  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:34.527949  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.028224  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.528470  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.028540  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.528499  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.028362  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.528483  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.028531  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.527865  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:39.028363  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:39.528571  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.027992  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.528552  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.028220  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.527889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:42.028462  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:42.028549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:42.070669  148123 cri.go:89] found id: ""
	I1010 19:24:42.070701  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.070710  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:42.070716  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:42.070775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:42.110693  148123 cri.go:89] found id: ""
	I1010 19:24:42.110731  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.110748  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:42.110756  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:42.110816  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:42.148471  148123 cri.go:89] found id: ""
	I1010 19:24:42.148511  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.148525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:42.148535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:42.148603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:42.184637  148123 cri.go:89] found id: ""
	I1010 19:24:42.184670  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.184683  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:42.184691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:42.184759  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:42.226794  148123 cri.go:89] found id: ""
	I1010 19:24:42.226834  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.226848  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:42.226857  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:42.226942  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:42.262969  148123 cri.go:89] found id: ""
	I1010 19:24:42.263004  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.263017  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:42.263027  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:42.263167  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:42.301063  148123 cri.go:89] found id: ""
	I1010 19:24:42.301088  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.301096  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:42.301102  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:42.301153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:42.338829  148123 cri.go:89] found id: ""
	I1010 19:24:42.338862  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.338873  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:42.338886  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:42.338901  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:42.391426  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:42.391478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:42.405520  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:42.405563  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:42.544245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:42.544275  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:42.544292  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:42.620760  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:42.620804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:45.164389  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:45.179714  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:45.179776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:45.216271  148123 cri.go:89] found id: ""
	I1010 19:24:45.216308  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.216319  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:45.216327  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:45.216394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:45.255109  148123 cri.go:89] found id: ""
	I1010 19:24:45.255154  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.255167  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:45.255176  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:45.255248  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:45.294338  148123 cri.go:89] found id: ""
	I1010 19:24:45.294369  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.294380  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:45.294388  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:45.294457  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:45.339573  148123 cri.go:89] found id: ""
	I1010 19:24:45.339606  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.339626  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:45.339636  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:45.339703  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:45.400184  148123 cri.go:89] found id: ""
	I1010 19:24:45.400214  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.400225  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:45.400234  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:45.400301  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:45.436152  148123 cri.go:89] found id: ""
	I1010 19:24:45.436183  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.436195  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:45.436203  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:45.436262  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.484321  148123 cri.go:89] found id: ""
	I1010 19:24:45.484347  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.484355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:45.484361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:45.484441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:45.532893  148123 cri.go:89] found id: ""
	I1010 19:24:45.532923  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.532932  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:45.532943  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:45.532958  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:45.585183  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:45.585214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:45.638800  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:45.638847  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:45.653928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:45.653961  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:45.733534  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:45.733564  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:45.733580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.314367  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:48.329687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:48.329754  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:48.370372  148123 cri.go:89] found id: ""
	I1010 19:24:48.370400  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.370409  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:48.370415  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:48.370490  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:48.409362  148123 cri.go:89] found id: ""
	I1010 19:24:48.409396  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.409407  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:48.409415  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:48.409500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:48.449640  148123 cri.go:89] found id: ""
	I1010 19:24:48.449672  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.449681  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:48.449687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:48.449746  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:48.485053  148123 cri.go:89] found id: ""
	I1010 19:24:48.485104  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.485121  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:48.485129  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:48.485183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:48.520083  148123 cri.go:89] found id: ""
	I1010 19:24:48.520113  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.520121  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:48.520127  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:48.520185  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:48.555112  148123 cri.go:89] found id: ""
	I1010 19:24:48.555138  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.555149  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:48.555156  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:48.555241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:48.592682  148123 cri.go:89] found id: ""
	I1010 19:24:48.592710  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.592719  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:48.592725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:48.592775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:48.628449  148123 cri.go:89] found id: ""
	I1010 19:24:48.628482  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.628490  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:48.628500  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:48.628513  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.709385  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:48.709428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:48.752542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:48.752677  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:48.812331  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:48.812385  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:48.827057  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:48.827095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:48.908312  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.409122  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:51.423404  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:51.423482  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:51.465742  148123 cri.go:89] found id: ""
	I1010 19:24:51.465781  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.465793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:51.465803  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:51.465875  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:51.501597  148123 cri.go:89] found id: ""
	I1010 19:24:51.501630  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.501641  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:51.501648  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:51.501711  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:51.537500  148123 cri.go:89] found id: ""
	I1010 19:24:51.537539  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.537551  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:51.537559  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:51.537626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:51.579546  148123 cri.go:89] found id: ""
	I1010 19:24:51.579576  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.579587  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:51.579595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:51.579660  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:51.619893  148123 cri.go:89] found id: ""
	I1010 19:24:51.619917  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.619925  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:51.619931  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:51.619999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:51.655787  148123 cri.go:89] found id: ""
	I1010 19:24:51.655824  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.655837  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:51.655846  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:51.655921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:51.699582  148123 cri.go:89] found id: ""
	I1010 19:24:51.699611  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.699619  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:51.699625  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:51.699685  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:51.738658  148123 cri.go:89] found id: ""
	I1010 19:24:51.738689  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.738697  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:51.738707  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:51.738721  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:51.789556  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:51.789587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:51.858919  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:51.858968  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:51.887818  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:51.887854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:51.964408  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.964434  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:51.964449  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:54.545627  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:54.560748  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:54.560830  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:54.595863  148123 cri.go:89] found id: ""
	I1010 19:24:54.595893  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.595903  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:54.595912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:54.595978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:54.633645  148123 cri.go:89] found id: ""
	I1010 19:24:54.633681  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.633693  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:54.633701  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:54.633761  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:54.668269  148123 cri.go:89] found id: ""
	I1010 19:24:54.668299  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.668311  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:54.668317  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:54.668369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:54.706559  148123 cri.go:89] found id: ""
	I1010 19:24:54.706591  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.706600  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:54.706608  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:54.706673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:54.746248  148123 cri.go:89] found id: ""
	I1010 19:24:54.746283  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.746295  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:54.746303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:54.746383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:54.782982  148123 cri.go:89] found id: ""
	I1010 19:24:54.783017  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.783027  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:54.783033  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:54.783085  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:54.819664  148123 cri.go:89] found id: ""
	I1010 19:24:54.819700  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.819713  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:54.819722  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:54.819797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:54.859603  148123 cri.go:89] found id: ""
	I1010 19:24:54.859632  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.859640  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:54.859650  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:54.859662  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:54.910949  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:54.910987  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:54.925941  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:54.925975  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:55.009626  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:55.009654  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:55.009669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.097196  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:55.097237  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:57.641732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:57.661141  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:57.661222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:57.716054  148123 cri.go:89] found id: ""
	I1010 19:24:57.716086  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.716094  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:57.716100  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:57.716178  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:57.766862  148123 cri.go:89] found id: ""
	I1010 19:24:57.766892  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.766906  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:57.766917  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:57.766989  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:57.814776  148123 cri.go:89] found id: ""
	I1010 19:24:57.814808  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.814821  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:57.814829  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:57.814899  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:57.850458  148123 cri.go:89] found id: ""
	I1010 19:24:57.850495  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.850505  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:57.850516  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:57.850667  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:57.886541  148123 cri.go:89] found id: ""
	I1010 19:24:57.886566  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.886575  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:57.886581  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:57.886645  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:57.920748  148123 cri.go:89] found id: ""
	I1010 19:24:57.920783  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.920795  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:57.920802  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:57.920887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:57.957801  148123 cri.go:89] found id: ""
	I1010 19:24:57.957833  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.957844  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:57.957852  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:57.957919  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:57.995648  148123 cri.go:89] found id: ""
	I1010 19:24:57.995683  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.995694  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:57.995706  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:57.995723  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:58.034987  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:58.035030  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:58.089014  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:58.089066  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:58.104179  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:58.104211  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:58.178239  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:58.178271  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:58.178291  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:00.756057  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:00.770086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:00.770150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:00.811400  148123 cri.go:89] found id: ""
	I1010 19:25:00.811444  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.811456  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:00.811466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:00.811533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:00.848821  148123 cri.go:89] found id: ""
	I1010 19:25:00.848859  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.848870  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:00.848877  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:00.848947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:00.883529  148123 cri.go:89] found id: ""
	I1010 19:25:00.883557  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.883566  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:00.883573  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:00.883631  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:00.922952  148123 cri.go:89] found id: ""
	I1010 19:25:00.922982  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.922992  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:00.922998  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:00.923057  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:00.956116  148123 cri.go:89] found id: ""
	I1010 19:25:00.956147  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.956159  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:00.956168  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:00.956233  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:00.998892  148123 cri.go:89] found id: ""
	I1010 19:25:00.998919  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.998930  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:00.998939  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:00.998996  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:01.037617  148123 cri.go:89] found id: ""
	I1010 19:25:01.037649  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.037657  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:01.037663  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:01.037717  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:01.078003  148123 cri.go:89] found id: ""
	I1010 19:25:01.078034  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.078046  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:01.078055  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:01.078069  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:01.118948  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:01.118977  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:01.171053  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:01.171091  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:01.187027  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:01.187060  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:01.261288  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:01.261315  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:01.261330  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:03.848144  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:03.862527  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:03.862601  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:03.899120  148123 cri.go:89] found id: ""
	I1010 19:25:03.899159  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.899180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:03.899187  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:03.899276  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:03.935461  148123 cri.go:89] found id: ""
	I1010 19:25:03.935492  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.935500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:03.935507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:03.935569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:03.971892  148123 cri.go:89] found id: ""
	I1010 19:25:03.971925  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.971937  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:03.971945  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:03.972019  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:04.008341  148123 cri.go:89] found id: ""
	I1010 19:25:04.008368  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.008377  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:04.008390  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:04.008447  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:04.042007  148123 cri.go:89] found id: ""
	I1010 19:25:04.042036  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.042044  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:04.042051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:04.042101  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:04.078017  148123 cri.go:89] found id: ""
	I1010 19:25:04.078045  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.078053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:04.078059  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:04.078112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:04.116792  148123 cri.go:89] found id: ""
	I1010 19:25:04.116823  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.116832  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:04.116839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:04.116928  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:04.153468  148123 cri.go:89] found id: ""
	I1010 19:25:04.153496  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.153503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:04.153513  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:04.153525  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:04.230646  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:04.230683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:04.270975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:04.271015  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.320845  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:04.320902  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:04.337789  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:04.337828  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:04.416077  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:06.916412  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:06.931309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:06.931376  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:06.969224  148123 cri.go:89] found id: ""
	I1010 19:25:06.969257  148123 logs.go:282] 0 containers: []
	W1010 19:25:06.969269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:06.969277  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:06.969346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:07.006598  148123 cri.go:89] found id: ""
	I1010 19:25:07.006653  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.006667  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:07.006676  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:07.006744  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:07.046392  148123 cri.go:89] found id: ""
	I1010 19:25:07.046418  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.046427  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:07.046433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:07.046491  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:07.089570  148123 cri.go:89] found id: ""
	I1010 19:25:07.089603  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.089615  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:07.089624  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:07.089689  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:07.127152  148123 cri.go:89] found id: ""
	I1010 19:25:07.127185  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.127195  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:07.127204  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:07.127282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:07.160516  148123 cri.go:89] found id: ""
	I1010 19:25:07.160543  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.160554  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:07.160563  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:07.160635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:07.197270  148123 cri.go:89] found id: ""
	I1010 19:25:07.197307  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.197318  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:07.197327  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:07.197395  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:07.232658  148123 cri.go:89] found id: ""
	I1010 19:25:07.232687  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.232696  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:07.232706  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:07.232720  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:07.246579  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:07.246609  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:07.315200  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:07.315230  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:07.315251  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:07.389532  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:07.389580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:07.438808  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:07.438837  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:09.990294  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:10.004155  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:10.004270  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:10.039034  148123 cri.go:89] found id: ""
	I1010 19:25:10.039068  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.039078  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:10.039087  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:10.039174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:10.079936  148123 cri.go:89] found id: ""
	I1010 19:25:10.079967  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.079978  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:10.079985  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:10.080068  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:10.116441  148123 cri.go:89] found id: ""
	I1010 19:25:10.116471  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.116483  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:10.116491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:10.116556  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:10.151998  148123 cri.go:89] found id: ""
	I1010 19:25:10.152045  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.152058  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:10.152075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:10.152153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:10.188514  148123 cri.go:89] found id: ""
	I1010 19:25:10.188547  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.188565  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:10.188574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:10.188640  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:10.223648  148123 cri.go:89] found id: ""
	I1010 19:25:10.223682  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.223694  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:10.223702  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:10.223771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:10.264117  148123 cri.go:89] found id: ""
	I1010 19:25:10.264143  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.264151  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:10.264158  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:10.264215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:10.302566  148123 cri.go:89] found id: ""
	I1010 19:25:10.302601  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.302613  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:10.302625  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:10.302649  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:10.316567  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:10.316606  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:10.388642  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:10.388674  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:10.388692  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:10.471035  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:10.471090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:10.519600  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:10.519645  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:13.075428  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:13.090830  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:13.090915  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:13.126302  148123 cri.go:89] found id: ""
	I1010 19:25:13.126336  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.126348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:13.126357  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:13.126429  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:13.163309  148123 cri.go:89] found id: ""
	I1010 19:25:13.163344  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.163357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:13.163365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:13.163428  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:13.197569  148123 cri.go:89] found id: ""
	I1010 19:25:13.197605  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.197615  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:13.197621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:13.197692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:13.236299  148123 cri.go:89] found id: ""
	I1010 19:25:13.236328  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.236336  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:13.236342  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:13.236406  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:13.275667  148123 cri.go:89] found id: ""
	I1010 19:25:13.275696  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.275705  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:13.275711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:13.275776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:13.309728  148123 cri.go:89] found id: ""
	I1010 19:25:13.309763  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.309774  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:13.309786  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:13.309854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:13.347464  148123 cri.go:89] found id: ""
	I1010 19:25:13.347493  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.347504  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:13.347513  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:13.347576  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:13.385086  148123 cri.go:89] found id: ""
	I1010 19:25:13.385119  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.385130  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:13.385142  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:13.385156  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:13.466513  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:13.466553  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:13.508610  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:13.508651  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:13.564897  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:13.564932  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:13.578771  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:13.578803  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:13.651857  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.153066  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:16.167613  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:16.167698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:16.203867  148123 cri.go:89] found id: ""
	I1010 19:25:16.203897  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.203906  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:16.203912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:16.203965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:16.242077  148123 cri.go:89] found id: ""
	I1010 19:25:16.242109  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.242130  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:16.242138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:16.242234  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:16.283792  148123 cri.go:89] found id: ""
	I1010 19:25:16.283825  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.283836  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:16.283844  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:16.283907  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:16.319934  148123 cri.go:89] found id: ""
	I1010 19:25:16.319969  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.319980  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:16.319990  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:16.320063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:16.360439  148123 cri.go:89] found id: ""
	I1010 19:25:16.360482  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.360495  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:16.360504  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:16.360569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:16.395890  148123 cri.go:89] found id: ""
	I1010 19:25:16.395922  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.395931  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:16.395941  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:16.396009  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:16.431396  148123 cri.go:89] found id: ""
	I1010 19:25:16.431442  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.431456  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:16.431463  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:16.431531  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:16.471768  148123 cri.go:89] found id: ""
	I1010 19:25:16.471796  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.471804  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:16.471814  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:16.471830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:16.526112  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:16.526155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:16.541041  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:16.541074  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:16.623245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.623273  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:16.623289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:16.710840  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:16.710884  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:19.257566  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:19.273982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:19.274063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:19.311360  148123 cri.go:89] found id: ""
	I1010 19:25:19.311392  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.311404  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:19.311411  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:19.311475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:19.347025  148123 cri.go:89] found id: ""
	I1010 19:25:19.347053  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.347062  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:19.347068  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:19.347120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:19.383575  148123 cri.go:89] found id: ""
	I1010 19:25:19.383608  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.383620  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:19.383633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:19.383698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:19.419245  148123 cri.go:89] found id: ""
	I1010 19:25:19.419270  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.419278  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:19.419284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:19.419334  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:19.453563  148123 cri.go:89] found id: ""
	I1010 19:25:19.453591  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.453601  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:19.453658  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:19.453715  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:19.496430  148123 cri.go:89] found id: ""
	I1010 19:25:19.496458  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.496466  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:19.496473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:19.496533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:19.534626  148123 cri.go:89] found id: ""
	I1010 19:25:19.534670  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.534682  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:19.534691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:19.534757  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:19.574186  148123 cri.go:89] found id: ""
	I1010 19:25:19.574236  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.574248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:19.574261  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:19.574283  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:19.587952  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:19.587989  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:19.664607  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:19.664644  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:19.664664  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:19.742185  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:19.742224  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:19.781530  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:19.781562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.335398  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:22.349627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:22.349693  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:22.389075  148123 cri.go:89] found id: ""
	I1010 19:25:22.389102  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.389110  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:22.389117  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:22.389168  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:22.424331  148123 cri.go:89] found id: ""
	I1010 19:25:22.424368  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.424381  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:22.424389  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:22.424460  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:22.464011  148123 cri.go:89] found id: ""
	I1010 19:25:22.464051  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.464062  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:22.464070  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:22.464141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:22.500786  148123 cri.go:89] found id: ""
	I1010 19:25:22.500819  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.500828  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:22.500835  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:22.500914  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:22.538016  148123 cri.go:89] found id: ""
	I1010 19:25:22.538053  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.538061  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:22.538068  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:22.538124  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:22.576600  148123 cri.go:89] found id: ""
	I1010 19:25:22.576629  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.576637  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:22.576643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:22.576702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:22.616020  148123 cri.go:89] found id: ""
	I1010 19:25:22.616048  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.616057  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:22.616064  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:22.616114  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:22.652238  148123 cri.go:89] found id: ""
	I1010 19:25:22.652271  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.652283  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:22.652295  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:22.652311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:22.726182  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:22.726216  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:22.726234  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:22.814040  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:22.814086  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:22.859254  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:22.859289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.916565  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:22.916607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:25.432636  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:25.446982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:25.447047  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.482440  148123 cri.go:89] found id: ""
	I1010 19:25:25.482472  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.482484  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:25.482492  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:25.482552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:25.517709  148123 cri.go:89] found id: ""
	I1010 19:25:25.517742  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.517756  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:25.517764  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:25.517867  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:25.561504  148123 cri.go:89] found id: ""
	I1010 19:25:25.561532  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.561544  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:25.561552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:25.561616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:25.601704  148123 cri.go:89] found id: ""
	I1010 19:25:25.601741  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.601753  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:25.601762  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:25.601825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:25.638519  148123 cri.go:89] found id: ""
	I1010 19:25:25.638544  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.638552  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:25.638558  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:25.638609  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:25.673390  148123 cri.go:89] found id: ""
	I1010 19:25:25.673425  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.673436  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:25.673447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:25.673525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:25.708251  148123 cri.go:89] found id: ""
	I1010 19:25:25.708278  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.708286  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:25.708293  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:25.708354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:25.749793  148123 cri.go:89] found id: ""
	I1010 19:25:25.749826  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.749837  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:25.749846  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:25.749861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:25.802511  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:25.802554  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:25.817523  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:25.817562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:25.898595  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:25.898619  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:25.898635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:25.978629  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:25.978684  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:28.520165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:28.532725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:28.532793  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:28.571667  148123 cri.go:89] found id: ""
	I1010 19:25:28.571695  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.571704  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:28.571710  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:28.571770  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:28.608136  148123 cri.go:89] found id: ""
	I1010 19:25:28.608165  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.608174  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:28.608181  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:28.608244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:28.642123  148123 cri.go:89] found id: ""
	I1010 19:25:28.642161  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.642173  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:28.642181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:28.642242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:28.678600  148123 cri.go:89] found id: ""
	I1010 19:25:28.678633  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.678643  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:28.678651  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:28.678702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:28.716627  148123 cri.go:89] found id: ""
	I1010 19:25:28.716660  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.716673  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:28.716681  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:28.716766  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:28.752750  148123 cri.go:89] found id: ""
	I1010 19:25:28.752786  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.752798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:28.752806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:28.752898  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:28.790021  148123 cri.go:89] found id: ""
	I1010 19:25:28.790054  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.790066  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:28.790075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:28.790128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:28.828760  148123 cri.go:89] found id: ""
	I1010 19:25:28.828803  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.828827  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:28.828839  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:28.828877  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:28.879773  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:28.879811  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:28.894311  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:28.894342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:28.968675  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:28.968706  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:28.968729  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:29.045862  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:29.045913  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:31.588772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:31.601453  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:31.601526  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:31.637056  148123 cri.go:89] found id: ""
	I1010 19:25:31.637090  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.637101  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:31.637108  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:31.637183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:31.677825  148123 cri.go:89] found id: ""
	I1010 19:25:31.677854  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.677862  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:31.677867  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:31.677920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:31.715372  148123 cri.go:89] found id: ""
	I1010 19:25:31.715402  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.715411  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:31.715419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:31.715470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:31.753767  148123 cri.go:89] found id: ""
	I1010 19:25:31.753794  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.753806  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:31.753814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:31.753879  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:31.792999  148123 cri.go:89] found id: ""
	I1010 19:25:31.793024  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.793035  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:31.793050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:31.793106  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:31.831571  148123 cri.go:89] found id: ""
	I1010 19:25:31.831598  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.831607  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:31.831614  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:31.831673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:31.880382  148123 cri.go:89] found id: ""
	I1010 19:25:31.880422  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.880436  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:31.880447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:31.880527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:31.919596  148123 cri.go:89] found id: ""
	I1010 19:25:31.919627  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.919637  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:31.919647  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:31.919661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:31.967963  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:31.968001  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:31.983064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:31.983106  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:32.056073  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:32.056105  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:32.056122  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:32.142927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:32.142978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:34.690025  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:34.705326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:34.705413  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:34.740756  148123 cri.go:89] found id: ""
	I1010 19:25:34.740786  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.740793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:34.740799  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:34.740872  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:34.777021  148123 cri.go:89] found id: ""
	I1010 19:25:34.777049  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.777061  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:34.777069  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:34.777123  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:34.826699  148123 cri.go:89] found id: ""
	I1010 19:25:34.826735  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.826747  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:34.826754  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:34.826896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:34.864297  148123 cri.go:89] found id: ""
	I1010 19:25:34.864329  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.864340  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:34.864348  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:34.864415  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:34.899198  148123 cri.go:89] found id: ""
	I1010 19:25:34.899234  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.899247  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:34.899256  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:34.899315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:34.934132  148123 cri.go:89] found id: ""
	I1010 19:25:34.934162  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.934170  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:34.934176  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:34.934238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:34.969858  148123 cri.go:89] found id: ""
	I1010 19:25:34.969891  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.969903  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:34.969911  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:34.969978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:35.006082  148123 cri.go:89] found id: ""
	I1010 19:25:35.006121  148123 logs.go:282] 0 containers: []
	W1010 19:25:35.006132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:35.006145  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:35.006160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:35.060596  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:35.060631  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:35.076246  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:35.076281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:35.151879  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:35.151912  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:35.151931  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:35.231802  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:35.231845  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:37.774550  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:37.787871  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:37.787938  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:37.824273  148123 cri.go:89] found id: ""
	I1010 19:25:37.824306  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.824317  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:37.824325  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:37.824410  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:37.860548  148123 cri.go:89] found id: ""
	I1010 19:25:37.860582  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.860593  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:37.860601  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:37.860677  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:37.894616  148123 cri.go:89] found id: ""
	I1010 19:25:37.894644  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.894654  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:37.894662  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:37.894730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:37.931247  148123 cri.go:89] found id: ""
	I1010 19:25:37.931272  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.931280  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:37.931287  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:37.931337  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:37.968028  148123 cri.go:89] found id: ""
	I1010 19:25:37.968068  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.968079  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:37.968086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:37.968153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:38.004661  148123 cri.go:89] found id: ""
	I1010 19:25:38.004692  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.004700  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:38.004706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:38.004760  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:38.040874  148123 cri.go:89] found id: ""
	I1010 19:25:38.040906  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.040915  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:38.040922  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:38.040990  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:38.083771  148123 cri.go:89] found id: ""
	I1010 19:25:38.083802  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.083811  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:38.083821  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:38.083836  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:38.099645  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:38.099683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:38.172168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:38.172207  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:38.172223  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:38.248523  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:38.248561  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:38.306594  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:38.306630  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:40.873868  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:40.889561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:40.889668  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:40.926769  148123 cri.go:89] found id: ""
	I1010 19:25:40.926800  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.926808  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:40.926816  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:40.926869  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:40.962564  148123 cri.go:89] found id: ""
	I1010 19:25:40.962592  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.962600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:40.962606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:40.962659  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:40.997856  148123 cri.go:89] found id: ""
	I1010 19:25:40.997885  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.997894  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:40.997900  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:40.997951  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:41.035479  148123 cri.go:89] found id: ""
	I1010 19:25:41.035506  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.035514  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:41.035520  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:41.035575  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:41.077733  148123 cri.go:89] found id: ""
	I1010 19:25:41.077817  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.077830  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:41.077840  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:41.077916  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:41.115439  148123 cri.go:89] found id: ""
	I1010 19:25:41.115476  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.115489  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:41.115498  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:41.115552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:41.152586  148123 cri.go:89] found id: ""
	I1010 19:25:41.152625  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.152636  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:41.152643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:41.152695  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:41.186807  148123 cri.go:89] found id: ""
	I1010 19:25:41.186840  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.186851  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:41.186862  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:41.186880  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:41.200404  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:41.200447  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:41.280717  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:41.280744  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:41.280760  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:41.360561  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:41.360611  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:41.404461  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:41.404497  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:43.955723  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:43.970895  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:43.970964  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:44.012162  148123 cri.go:89] found id: ""
	I1010 19:25:44.012194  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.012203  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:44.012209  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:44.012282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:44.074583  148123 cri.go:89] found id: ""
	I1010 19:25:44.074606  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.074614  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:44.074620  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:44.074675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:44.133562  148123 cri.go:89] found id: ""
	I1010 19:25:44.133588  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.133596  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:44.133602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:44.133658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:44.168832  148123 cri.go:89] found id: ""
	I1010 19:25:44.168883  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.168896  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:44.168908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:44.168967  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:44.204896  148123 cri.go:89] found id: ""
	I1010 19:25:44.204923  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.204934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:44.204943  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:44.205002  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:44.242410  148123 cri.go:89] found id: ""
	I1010 19:25:44.242437  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.242448  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:44.242456  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:44.242524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:44.278553  148123 cri.go:89] found id: ""
	I1010 19:25:44.278581  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.278589  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:44.278595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:44.278658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:44.316099  148123 cri.go:89] found id: ""
	I1010 19:25:44.316125  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.316132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:44.316141  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:44.316155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:44.365809  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:44.365860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:44.380129  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:44.380162  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:44.454241  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:44.454267  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:44.454281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:44.530928  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:44.530974  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.078279  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:47.092233  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:47.092310  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:47.129517  148123 cri.go:89] found id: ""
	I1010 19:25:47.129557  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.129573  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:47.129582  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:47.129654  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:47.170309  148123 cri.go:89] found id: ""
	I1010 19:25:47.170345  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.170357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:47.170365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:47.170432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:47.205110  148123 cri.go:89] found id: ""
	I1010 19:25:47.205145  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.205156  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:47.205162  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:47.205228  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:47.245668  148123 cri.go:89] found id: ""
	I1010 19:25:47.245695  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.245704  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:47.245711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:47.245768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:47.284549  148123 cri.go:89] found id: ""
	I1010 19:25:47.284576  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.284584  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:47.284590  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:47.284643  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:47.324676  148123 cri.go:89] found id: ""
	I1010 19:25:47.324708  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.324720  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:47.324729  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:47.324782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:47.364315  148123 cri.go:89] found id: ""
	I1010 19:25:47.364347  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.364356  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:47.364362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:47.364433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:47.400611  148123 cri.go:89] found id: ""
	I1010 19:25:47.400641  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.400652  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:47.400664  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:47.400680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:47.415185  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:47.415227  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:47.481638  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:47.481665  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:47.481681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:47.560135  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:47.560171  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.602144  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:47.602174  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.151770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:50.165336  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:50.165403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:50.201045  148123 cri.go:89] found id: ""
	I1010 19:25:50.201072  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.201082  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:50.201089  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:50.201154  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:50.237047  148123 cri.go:89] found id: ""
	I1010 19:25:50.237082  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.237094  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:50.237102  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:50.237174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:50.273727  148123 cri.go:89] found id: ""
	I1010 19:25:50.273756  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.273767  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:50.273780  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:50.273843  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:50.311397  148123 cri.go:89] found id: ""
	I1010 19:25:50.311424  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.311433  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:50.311450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:50.311513  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:50.347593  148123 cri.go:89] found id: ""
	I1010 19:25:50.347625  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.347637  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:50.347705  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:50.347782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:50.382195  148123 cri.go:89] found id: ""
	I1010 19:25:50.382228  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.382240  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:50.382247  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:50.382313  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:50.419189  148123 cri.go:89] found id: ""
	I1010 19:25:50.419221  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.419229  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:50.419236  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:50.419297  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:50.454368  148123 cri.go:89] found id: ""
	I1010 19:25:50.454399  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.454410  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:50.454421  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:50.454440  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:50.495575  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:50.495623  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.548691  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:50.548737  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:50.564585  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:50.564621  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:50.637152  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:50.637174  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:50.637190  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.221939  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:53.235889  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:53.235968  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:53.278589  148123 cri.go:89] found id: ""
	I1010 19:25:53.278620  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.278632  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:53.278640  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:53.278705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:53.315062  148123 cri.go:89] found id: ""
	I1010 19:25:53.315090  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.315101  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:53.315109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:53.315182  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:53.348994  148123 cri.go:89] found id: ""
	I1010 19:25:53.349031  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.349040  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:53.349046  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:53.349099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:53.383334  148123 cri.go:89] found id: ""
	I1010 19:25:53.383363  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.383373  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:53.383380  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:53.383450  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:53.417888  148123 cri.go:89] found id: ""
	I1010 19:25:53.417920  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.417931  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:53.417938  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:53.418013  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:53.454757  148123 cri.go:89] found id: ""
	I1010 19:25:53.454787  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.454798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:53.454806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:53.454871  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:53.491422  148123 cri.go:89] found id: ""
	I1010 19:25:53.491452  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.491464  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:53.491472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:53.491541  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:53.531240  148123 cri.go:89] found id: ""
	I1010 19:25:53.531271  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.531282  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:53.531294  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:53.531310  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:53.582576  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:53.582608  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:53.596481  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:53.596511  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:53.666134  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:53.666162  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:53.666178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.745621  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:53.745659  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.290744  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:56.307536  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:56.307610  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:56.353456  148123 cri.go:89] found id: ""
	I1010 19:25:56.353484  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.353494  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:56.353501  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:56.353553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:56.394093  148123 cri.go:89] found id: ""
	I1010 19:25:56.394120  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.394131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:56.394138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:56.394203  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:56.428388  148123 cri.go:89] found id: ""
	I1010 19:25:56.428428  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.428441  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:56.428450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:56.428524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:56.464414  148123 cri.go:89] found id: ""
	I1010 19:25:56.464459  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.464472  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:56.464480  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:56.464563  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:56.505271  148123 cri.go:89] found id: ""
	I1010 19:25:56.505307  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.505316  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:56.505322  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:56.505374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:56.546424  148123 cri.go:89] found id: ""
	I1010 19:25:56.546456  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.546467  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:56.546475  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:56.546545  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:56.587458  148123 cri.go:89] found id: ""
	I1010 19:25:56.587489  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.587501  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:56.587509  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:56.587588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:56.628248  148123 cri.go:89] found id: ""
	I1010 19:25:56.628279  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.628291  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:56.628304  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:56.628320  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:56.642109  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:56.642141  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:56.723646  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:56.723678  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:56.723696  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:56.805849  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:56.805899  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.849863  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:56.849891  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:59.407093  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:59.422915  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:59.423005  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:59.462867  148123 cri.go:89] found id: ""
	I1010 19:25:59.462898  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.462909  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:59.462917  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:59.462980  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:59.496931  148123 cri.go:89] found id: ""
	I1010 19:25:59.496958  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.496968  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:59.496983  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:59.497049  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:59.532241  148123 cri.go:89] found id: ""
	I1010 19:25:59.532271  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.532283  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:59.532291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:59.532352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:59.571285  148123 cri.go:89] found id: ""
	I1010 19:25:59.571313  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.571324  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:59.571331  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:59.571391  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:59.606687  148123 cri.go:89] found id: ""
	I1010 19:25:59.606721  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.606734  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:59.606741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:59.606800  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:59.643247  148123 cri.go:89] found id: ""
	I1010 19:25:59.643276  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.643286  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:59.643294  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:59.643369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:59.679305  148123 cri.go:89] found id: ""
	I1010 19:25:59.679335  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.679344  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:59.679350  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:59.679407  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:59.716149  148123 cri.go:89] found id: ""
	I1010 19:25:59.716198  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.716210  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:59.716222  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:59.716239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:59.730334  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:59.730364  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:59.799759  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.799786  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:59.799804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:59.881883  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:59.881925  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:59.923755  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:59.923784  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.478043  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:02.492627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:02.492705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:02.532133  148123 cri.go:89] found id: ""
	I1010 19:26:02.532160  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.532172  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:02.532179  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:02.532244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:02.571915  148123 cri.go:89] found id: ""
	I1010 19:26:02.571943  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.571951  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:02.571957  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:02.572014  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:02.616330  148123 cri.go:89] found id: ""
	I1010 19:26:02.616365  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.616376  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:02.616385  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:02.616455  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:02.662245  148123 cri.go:89] found id: ""
	I1010 19:26:02.662275  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.662284  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:02.662291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:02.662354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:02.698692  148123 cri.go:89] found id: ""
	I1010 19:26:02.698720  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.698731  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:02.698739  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:02.698805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:02.736643  148123 cri.go:89] found id: ""
	I1010 19:26:02.736667  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.736677  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:02.736685  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:02.736742  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:02.780356  148123 cri.go:89] found id: ""
	I1010 19:26:02.780388  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.780399  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:02.780407  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:02.780475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:02.816716  148123 cri.go:89] found id: ""
	I1010 19:26:02.816744  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.816752  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:02.816761  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:02.816775  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:02.899002  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:02.899046  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:02.942060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:02.942093  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.997605  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:02.997661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:03.011768  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:03.011804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:03.096404  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:05.596971  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:05.612524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:05.612598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:05.649999  148123 cri.go:89] found id: ""
	I1010 19:26:05.650032  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.650044  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:05.650052  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:05.650119  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:05.684365  148123 cri.go:89] found id: ""
	I1010 19:26:05.684398  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.684419  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:05.684427  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:05.684494  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:05.721801  148123 cri.go:89] found id: ""
	I1010 19:26:05.721832  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.721840  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:05.721847  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:05.721908  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:05.762010  148123 cri.go:89] found id: ""
	I1010 19:26:05.762037  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.762046  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:05.762056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:05.762117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:05.803425  148123 cri.go:89] found id: ""
	I1010 19:26:05.803457  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.803468  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:05.803476  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:05.803551  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:05.838800  148123 cri.go:89] found id: ""
	I1010 19:26:05.838833  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.838845  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:05.838853  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:05.838921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:05.875110  148123 cri.go:89] found id: ""
	I1010 19:26:05.875147  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.875158  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:05.875167  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:05.875245  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:05.908951  148123 cri.go:89] found id: ""
	I1010 19:26:05.908988  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.909000  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:05.909012  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:05.909027  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:05.922263  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:05.922293  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:05.996441  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:05.996465  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:05.996484  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:06.078113  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:06.078160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:06.122919  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:06.122956  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:08.674464  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:08.688452  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:08.688570  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:08.726240  148123 cri.go:89] found id: ""
	I1010 19:26:08.726269  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.726279  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:08.726287  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:08.726370  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:08.764762  148123 cri.go:89] found id: ""
	I1010 19:26:08.764790  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.764799  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:08.764806  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:08.764887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:08.805522  148123 cri.go:89] found id: ""
	I1010 19:26:08.805552  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.805563  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:08.805572  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:08.805642  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:08.845694  148123 cri.go:89] found id: ""
	I1010 19:26:08.845729  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.845740  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:08.845747  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:08.845817  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:08.885024  148123 cri.go:89] found id: ""
	I1010 19:26:08.885054  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.885066  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:08.885074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:08.885138  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:08.927500  148123 cri.go:89] found id: ""
	I1010 19:26:08.927531  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.927540  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:08.927547  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:08.927616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:08.963870  148123 cri.go:89] found id: ""
	I1010 19:26:08.963904  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.963916  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:08.963924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:08.963988  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:08.997984  148123 cri.go:89] found id: ""
	I1010 19:26:08.998017  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.998027  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:08.998039  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:08.998056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:09.049307  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:09.049341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:09.063341  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:09.063432  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:09.145190  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:09.145214  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:09.145226  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:09.231409  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:09.231445  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:11.773475  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:11.788981  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:11.789055  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:11.830289  148123 cri.go:89] found id: ""
	I1010 19:26:11.830319  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.830327  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:11.830333  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:11.830399  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:11.867093  148123 cri.go:89] found id: ""
	I1010 19:26:11.867120  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.867131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:11.867138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:11.867212  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:11.902146  148123 cri.go:89] found id: ""
	I1010 19:26:11.902181  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.902192  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:11.902201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:11.902271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:11.938652  148123 cri.go:89] found id: ""
	I1010 19:26:11.938681  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.938691  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:11.938703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:11.938771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:11.975903  148123 cri.go:89] found id: ""
	I1010 19:26:11.975937  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.975947  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:11.975955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:11.976020  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:12.014083  148123 cri.go:89] found id: ""
	I1010 19:26:12.014111  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.014119  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:12.014126  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:12.014190  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:12.057832  148123 cri.go:89] found id: ""
	I1010 19:26:12.057858  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.057867  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:12.057874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:12.057925  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:12.103253  148123 cri.go:89] found id: ""
	I1010 19:26:12.103286  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.103298  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:12.103311  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:12.103326  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:12.171062  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:12.171104  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:12.185463  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:12.185496  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:12.261568  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:12.261592  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:12.261607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:12.345819  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:12.345860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:14.886283  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:14.901117  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:14.901213  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:14.935235  148123 cri.go:89] found id: ""
	I1010 19:26:14.935264  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.935273  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:14.935280  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:14.935336  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:14.970617  148123 cri.go:89] found id: ""
	I1010 19:26:14.970642  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.970652  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:14.970658  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:14.970722  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:15.011086  148123 cri.go:89] found id: ""
	I1010 19:26:15.011113  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.011122  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:15.011128  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:15.011188  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:15.053004  148123 cri.go:89] found id: ""
	I1010 19:26:15.053026  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.053033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:15.053039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:15.053099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:15.089275  148123 cri.go:89] found id: ""
	I1010 19:26:15.089304  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.089312  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:15.089319  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:15.089383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:15.126620  148123 cri.go:89] found id: ""
	I1010 19:26:15.126645  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.126654  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:15.126660  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:15.126723  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:15.160920  148123 cri.go:89] found id: ""
	I1010 19:26:15.160956  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.160969  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:15.160979  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:15.161037  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:15.197430  148123 cri.go:89] found id: ""
	I1010 19:26:15.197462  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.197474  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:15.197486  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:15.197507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:15.240905  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:15.240953  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:15.297663  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:15.297719  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:15.312279  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:15.312313  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:15.395296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:15.395320  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:15.395336  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:17.978030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:17.992574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:17.992651  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:18.028846  148123 cri.go:89] found id: ""
	I1010 19:26:18.028890  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.028901  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:18.028908  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:18.028965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:18.073004  148123 cri.go:89] found id: ""
	I1010 19:26:18.073033  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.073041  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:18.073047  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:18.073118  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:18.112062  148123 cri.go:89] found id: ""
	I1010 19:26:18.112098  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.112111  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:18.112121  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:18.112209  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:18.151565  148123 cri.go:89] found id: ""
	I1010 19:26:18.151597  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.151608  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:18.151616  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:18.151675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:18.197483  148123 cri.go:89] found id: ""
	I1010 19:26:18.197509  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.197518  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:18.197524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:18.197591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:18.235151  148123 cri.go:89] found id: ""
	I1010 19:26:18.235181  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.235193  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:18.235201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:18.235279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:18.273807  148123 cri.go:89] found id: ""
	I1010 19:26:18.273841  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.273852  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:18.273858  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:18.273920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:18.311644  148123 cri.go:89] found id: ""
	I1010 19:26:18.311677  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.311688  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:18.311700  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:18.311716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:18.355599  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:18.355635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:18.407786  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:18.407830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:18.422944  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:18.422976  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:18.496830  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:18.496873  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:18.496887  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:21.108300  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:21.123177  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:21.123243  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:21.162544  148123 cri.go:89] found id: ""
	I1010 19:26:21.162575  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.162586  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:21.162595  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:21.162662  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:21.199799  148123 cri.go:89] found id: ""
	I1010 19:26:21.199828  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.199839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:21.199845  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:21.199901  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:21.246333  148123 cri.go:89] found id: ""
	I1010 19:26:21.246357  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.246366  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:21.246372  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:21.246433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:21.282681  148123 cri.go:89] found id: ""
	I1010 19:26:21.282719  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.282732  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:21.282740  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:21.282810  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:21.319500  148123 cri.go:89] found id: ""
	I1010 19:26:21.319535  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.319546  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:21.319554  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:21.319623  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:21.355779  148123 cri.go:89] found id: ""
	I1010 19:26:21.355809  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.355820  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:21.355828  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:21.355902  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:21.394567  148123 cri.go:89] found id: ""
	I1010 19:26:21.394608  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.394617  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:21.394623  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:21.394684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:21.430009  148123 cri.go:89] found id: ""
	I1010 19:26:21.430046  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.430058  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:21.430069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:21.430089  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:21.443267  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:21.443301  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:21.517046  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:21.517068  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:21.517083  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:21.594927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:21.594978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:21.634274  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:21.634311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:24.189760  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:24.204355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:24.204420  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:24.248130  148123 cri.go:89] found id: ""
	I1010 19:26:24.248160  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.248171  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:24.248178  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:24.248244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:24.293281  148123 cri.go:89] found id: ""
	I1010 19:26:24.293312  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.293324  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:24.293331  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:24.293400  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:24.350709  148123 cri.go:89] found id: ""
	I1010 19:26:24.350743  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.350755  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:24.350765  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:24.350838  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:24.394113  148123 cri.go:89] found id: ""
	I1010 19:26:24.394152  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.394170  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:24.394181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:24.394256  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:24.445999  148123 cri.go:89] found id: ""
	I1010 19:26:24.446030  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.446042  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:24.446051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:24.446120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:24.483566  148123 cri.go:89] found id: ""
	I1010 19:26:24.483596  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.483605  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:24.483612  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:24.483665  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:24.520705  148123 cri.go:89] found id: ""
	I1010 19:26:24.520736  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.520748  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:24.520757  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:24.520825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:24.557316  148123 cri.go:89] found id: ""
	I1010 19:26:24.557346  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.557355  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:24.557364  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:24.557376  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:24.608065  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:24.608109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:24.623202  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:24.623250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:24.694665  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:24.694697  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:24.694716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.783369  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:24.783414  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.327524  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:27.341132  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:27.341214  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:27.376589  148123 cri.go:89] found id: ""
	I1010 19:26:27.376618  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.376627  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:27.376633  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:27.376684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:27.414452  148123 cri.go:89] found id: ""
	I1010 19:26:27.414491  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.414502  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:27.414510  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:27.414572  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:27.449839  148123 cri.go:89] found id: ""
	I1010 19:26:27.449867  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.449875  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:27.449882  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:27.449932  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:27.492445  148123 cri.go:89] found id: ""
	I1010 19:26:27.492472  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.492480  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:27.492487  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:27.492549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:27.533032  148123 cri.go:89] found id: ""
	I1010 19:26:27.533085  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.533095  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:27.533122  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:27.533199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:27.571096  148123 cri.go:89] found id: ""
	I1010 19:26:27.571122  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.571130  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:27.571135  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:27.571204  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:27.608762  148123 cri.go:89] found id: ""
	I1010 19:26:27.608798  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.608809  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:27.608818  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:27.608896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:27.648200  148123 cri.go:89] found id: ""
	I1010 19:26:27.648236  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.648248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:27.648260  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:27.648275  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.698124  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:27.698154  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:27.755009  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:27.755052  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:27.770048  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:27.770084  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:27.845475  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:27.845505  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:27.845519  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:30.423117  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:30.436706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:30.436768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:30.473367  148123 cri.go:89] found id: ""
	I1010 19:26:30.473396  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.473408  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:30.473417  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:30.473470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:30.510560  148123 cri.go:89] found id: ""
	I1010 19:26:30.510599  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.510610  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:30.510619  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:30.510687  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:30.549682  148123 cri.go:89] found id: ""
	I1010 19:26:30.549715  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.549726  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:30.549734  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:30.549798  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:30.586401  148123 cri.go:89] found id: ""
	I1010 19:26:30.586425  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.586434  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:30.586441  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:30.586492  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:30.622012  148123 cri.go:89] found id: ""
	I1010 19:26:30.622037  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.622045  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:30.622052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:30.622107  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:30.658336  148123 cri.go:89] found id: ""
	I1010 19:26:30.658364  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.658372  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:30.658378  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:30.658442  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:30.695495  148123 cri.go:89] found id: ""
	I1010 19:26:30.695523  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.695532  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:30.695538  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:30.695591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:30.735093  148123 cri.go:89] found id: ""
	I1010 19:26:30.735125  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.735136  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:30.735148  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:30.735163  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:30.816097  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:30.816140  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:30.853878  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:30.853915  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:30.906289  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:30.906341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:30.923742  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:30.923769  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:30.993226  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.493799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:33.508083  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:33.508166  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:33.545019  148123 cri.go:89] found id: ""
	I1010 19:26:33.545055  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.545072  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:33.545081  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:33.545150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:33.584215  148123 cri.go:89] found id: ""
	I1010 19:26:33.584242  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.584252  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:33.584259  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:33.584315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:33.620598  148123 cri.go:89] found id: ""
	I1010 19:26:33.620627  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.620636  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:33.620643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:33.620691  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:33.655225  148123 cri.go:89] found id: ""
	I1010 19:26:33.655252  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.655260  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:33.655267  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:33.655322  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:33.693464  148123 cri.go:89] found id: ""
	I1010 19:26:33.693490  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.693498  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:33.693505  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:33.693554  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:33.734024  148123 cri.go:89] found id: ""
	I1010 19:26:33.734059  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.734071  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:33.734079  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:33.734141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:33.769434  148123 cri.go:89] found id: ""
	I1010 19:26:33.769467  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.769476  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:33.769483  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:33.769533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:33.809011  148123 cri.go:89] found id: ""
	I1010 19:26:33.809040  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.809049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:33.809059  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:33.809071  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:33.864186  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:33.864230  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:33.878681  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:33.878711  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:33.958225  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.958250  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:33.958265  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:34.034730  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:34.034774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.577123  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:36.591494  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:36.591560  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:36.625170  148123 cri.go:89] found id: ""
	I1010 19:26:36.625210  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.625222  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:36.625230  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:36.625295  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:36.660790  148123 cri.go:89] found id: ""
	I1010 19:26:36.660821  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.660831  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:36.660838  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:36.660911  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:36.695742  148123 cri.go:89] found id: ""
	I1010 19:26:36.695774  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.695786  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:36.695793  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:36.695854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:36.729523  148123 cri.go:89] found id: ""
	I1010 19:26:36.729552  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.729561  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:36.729569  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:36.729630  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:36.765515  148123 cri.go:89] found id: ""
	I1010 19:26:36.765549  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.765561  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:36.765571  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:36.765635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:36.799110  148123 cri.go:89] found id: ""
	I1010 19:26:36.799144  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.799155  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:36.799164  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:36.799224  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:36.832806  148123 cri.go:89] found id: ""
	I1010 19:26:36.832834  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.832845  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:36.832865  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:36.832933  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:36.869093  148123 cri.go:89] found id: ""
	I1010 19:26:36.869126  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.869137  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:36.869149  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:36.869165  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:36.922229  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:36.922276  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:36.936973  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:36.937019  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:37.016400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:37.016425  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:37.016448  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:37.106308  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:37.106354  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:39.652101  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:39.665546  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:39.665639  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:39.699646  148123 cri.go:89] found id: ""
	I1010 19:26:39.699675  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.699686  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:39.699693  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:39.699745  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:39.735568  148123 cri.go:89] found id: ""
	I1010 19:26:39.735592  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.735600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:39.735606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:39.735656  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:39.770021  148123 cri.go:89] found id: ""
	I1010 19:26:39.770049  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.770060  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:39.770067  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:39.770139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:39.804867  148123 cri.go:89] found id: ""
	I1010 19:26:39.804894  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.804903  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:39.804910  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:39.804972  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:39.847074  148123 cri.go:89] found id: ""
	I1010 19:26:39.847104  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.847115  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:39.847124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:39.847200  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:39.890334  148123 cri.go:89] found id: ""
	I1010 19:26:39.890368  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.890379  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:39.890387  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:39.890456  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:39.926316  148123 cri.go:89] found id: ""
	I1010 19:26:39.926346  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.926355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:39.926362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:39.926416  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:39.966179  148123 cri.go:89] found id: ""
	I1010 19:26:39.966215  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.966227  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:39.966239  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:39.966255  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:40.047457  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:40.047505  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.086611  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:40.086656  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:40.139123  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:40.139167  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:40.152966  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:40.152995  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:40.231009  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:42.731851  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:42.748201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:42.748287  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:42.787567  148123 cri.go:89] found id: ""
	I1010 19:26:42.787599  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.787607  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:42.787614  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:42.787697  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:42.831051  148123 cri.go:89] found id: ""
	I1010 19:26:42.831089  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.831100  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:42.831109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:42.831180  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:42.871352  148123 cri.go:89] found id: ""
	I1010 19:26:42.871390  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.871402  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:42.871410  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:42.871484  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:42.910283  148123 cri.go:89] found id: ""
	I1010 19:26:42.910316  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.910327  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:42.910335  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:42.910403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:42.946971  148123 cri.go:89] found id: ""
	I1010 19:26:42.947003  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.947012  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:42.947019  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:42.947087  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:42.985484  148123 cri.go:89] found id: ""
	I1010 19:26:42.985517  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.985528  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:42.985537  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:42.985603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:43.024500  148123 cri.go:89] found id: ""
	I1010 19:26:43.024535  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.024544  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:43.024552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:43.024608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:43.064008  148123 cri.go:89] found id: ""
	I1010 19:26:43.064039  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.064049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:43.064060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:43.064075  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:43.119367  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:43.119415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:43.133396  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:43.133428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:43.207447  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:43.207470  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:43.207491  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:43.290416  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:43.290456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:45.834115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:45.849172  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:45.849238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:45.886020  148123 cri.go:89] found id: ""
	I1010 19:26:45.886049  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.886060  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:45.886067  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:45.886152  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:45.923188  148123 cri.go:89] found id: ""
	I1010 19:26:45.923218  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.923245  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:45.923253  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:45.923316  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:45.960297  148123 cri.go:89] found id: ""
	I1010 19:26:45.960337  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.960351  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:45.960361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:45.960432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:45.995140  148123 cri.go:89] found id: ""
	I1010 19:26:45.995173  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.995184  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:45.995191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:45.995265  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:46.040390  148123 cri.go:89] found id: ""
	I1010 19:26:46.040418  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.040426  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:46.040433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:46.040500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:46.079999  148123 cri.go:89] found id: ""
	I1010 19:26:46.080029  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.080042  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:46.080052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:46.080105  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:46.117331  148123 cri.go:89] found id: ""
	I1010 19:26:46.117357  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.117364  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:46.117370  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:46.117441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:46.152248  148123 cri.go:89] found id: ""
	I1010 19:26:46.152279  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.152290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:46.152303  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:46.152319  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:46.192503  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:46.192533  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:46.246419  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:46.246456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:46.259864  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:46.259895  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:46.338518  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:46.338549  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:46.338567  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:48.924216  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:48.938741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:48.938805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:48.973310  148123 cri.go:89] found id: ""
	I1010 19:26:48.973339  148123 logs.go:282] 0 containers: []
	W1010 19:26:48.973348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:48.973356  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:48.973411  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:49.010467  148123 cri.go:89] found id: ""
	I1010 19:26:49.010492  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.010500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:49.010506  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:49.010585  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:49.051621  148123 cri.go:89] found id: ""
	I1010 19:26:49.051646  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.051653  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:49.051664  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:49.051727  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:49.091090  148123 cri.go:89] found id: ""
	I1010 19:26:49.091121  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.091132  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:49.091140  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:49.091202  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:49.127669  148123 cri.go:89] found id: ""
	I1010 19:26:49.127712  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.127724  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:49.127732  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:49.127804  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:49.164446  148123 cri.go:89] found id: ""
	I1010 19:26:49.164476  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.164485  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:49.164491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:49.164553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:49.201222  148123 cri.go:89] found id: ""
	I1010 19:26:49.201263  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.201275  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:49.201284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:49.201345  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:49.238258  148123 cri.go:89] found id: ""
	I1010 19:26:49.238283  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.238290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:49.238299  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:49.238311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:49.313576  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:49.313619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:49.351066  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:49.351096  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:49.402772  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:49.402823  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:49.417713  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:49.417752  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:49.488834  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:51.989031  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:52.003056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:52.003140  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:52.039682  148123 cri.go:89] found id: ""
	I1010 19:26:52.039709  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.039718  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:52.039725  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:52.039788  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:52.081402  148123 cri.go:89] found id: ""
	I1010 19:26:52.081433  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.081443  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:52.081449  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:52.081502  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:52.118270  148123 cri.go:89] found id: ""
	I1010 19:26:52.118304  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.118315  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:52.118325  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:52.118392  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:52.154692  148123 cri.go:89] found id: ""
	I1010 19:26:52.154724  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.154735  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:52.154743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:52.154807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:52.190056  148123 cri.go:89] found id: ""
	I1010 19:26:52.190085  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.190094  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:52.190101  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:52.190161  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:52.225468  148123 cri.go:89] found id: ""
	I1010 19:26:52.225501  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.225511  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:52.225521  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:52.225589  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:52.260682  148123 cri.go:89] found id: ""
	I1010 19:26:52.260710  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.260718  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:52.260724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:52.260774  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:52.297613  148123 cri.go:89] found id: ""
	I1010 19:26:52.297642  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.297659  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:52.297672  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:52.297689  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:52.352224  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:52.352267  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:52.367003  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:52.367033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:52.443124  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:52.443157  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:52.443173  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:52.522391  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:52.522433  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.066877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:55.082191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:55.082258  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:55.120729  148123 cri.go:89] found id: ""
	I1010 19:26:55.120763  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.120775  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:55.120784  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:55.120863  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:55.154795  148123 cri.go:89] found id: ""
	I1010 19:26:55.154827  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.154839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:55.154848  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:55.154904  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:55.194846  148123 cri.go:89] found id: ""
	I1010 19:26:55.194876  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.194889  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:55.194897  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:55.194958  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:55.230173  148123 cri.go:89] found id: ""
	I1010 19:26:55.230201  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.230212  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:55.230220  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:55.230290  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:55.264999  148123 cri.go:89] found id: ""
	I1010 19:26:55.265025  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.265032  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:55.265039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:55.265096  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:55.303666  148123 cri.go:89] found id: ""
	I1010 19:26:55.303695  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.303703  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:55.303724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:55.303797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:55.342061  148123 cri.go:89] found id: ""
	I1010 19:26:55.342087  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.342098  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:55.342106  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:55.342171  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:55.378305  148123 cri.go:89] found id: ""
	I1010 19:26:55.378336  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.378345  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:55.378354  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:55.378365  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.416815  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:55.416865  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:55.467371  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:55.467415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:55.481704  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:55.481744  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:55.559219  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:55.559242  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:55.559254  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.141472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:58.154326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:58.154394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:58.193053  148123 cri.go:89] found id: ""
	I1010 19:26:58.193080  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.193088  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:58.193094  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:58.193142  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:58.232514  148123 cri.go:89] found id: ""
	I1010 19:26:58.232538  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.232545  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:58.232551  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:58.232599  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:58.266724  148123 cri.go:89] found id: ""
	I1010 19:26:58.266756  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.266765  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:58.266771  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:58.266824  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:58.301986  148123 cri.go:89] found id: ""
	I1010 19:26:58.302012  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.302019  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:58.302031  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:58.302080  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:58.340394  148123 cri.go:89] found id: ""
	I1010 19:26:58.340431  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.340440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:58.340448  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:58.340500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:58.407035  148123 cri.go:89] found id: ""
	I1010 19:26:58.407069  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.407081  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:58.407088  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:58.407151  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:58.446650  148123 cri.go:89] found id: ""
	I1010 19:26:58.446682  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.446694  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:58.446703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:58.446763  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:58.480719  148123 cri.go:89] found id: ""
	I1010 19:26:58.480750  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.480759  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:58.480768  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:58.480781  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.561568  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:58.561602  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:58.609237  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:58.609264  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:58.659705  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:58.659748  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:58.674646  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:58.674680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:58.750922  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.252098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:01.265420  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:01.265497  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:01.300133  148123 cri.go:89] found id: ""
	I1010 19:27:01.300168  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.300180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:01.300188  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:01.300272  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:01.337451  148123 cri.go:89] found id: ""
	I1010 19:27:01.337477  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.337485  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:01.337492  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:01.337539  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:01.371987  148123 cri.go:89] found id: ""
	I1010 19:27:01.372019  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.372028  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:01.372034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:01.372098  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:01.408996  148123 cri.go:89] found id: ""
	I1010 19:27:01.409022  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.409033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:01.409042  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:01.409109  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:01.443558  148123 cri.go:89] found id: ""
	I1010 19:27:01.443587  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.443595  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:01.443602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:01.443663  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:01.478517  148123 cri.go:89] found id: ""
	I1010 19:27:01.478546  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.478555  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:01.478561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:01.478611  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:01.513528  148123 cri.go:89] found id: ""
	I1010 19:27:01.513556  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.513568  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:01.513576  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:01.513641  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:01.558861  148123 cri.go:89] found id: ""
	I1010 19:27:01.558897  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.558909  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:01.558921  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:01.558937  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:01.599719  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:01.599747  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:01.650736  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:01.650774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:01.664643  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:01.664669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:01.736533  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.736557  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:01.736570  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:04.317302  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:04.330450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:04.330519  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:04.368893  148123 cri.go:89] found id: ""
	I1010 19:27:04.368923  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.368932  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:04.368939  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:04.368993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:04.406598  148123 cri.go:89] found id: ""
	I1010 19:27:04.406625  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.406634  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:04.406640  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:04.406692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:04.441485  148123 cri.go:89] found id: ""
	I1010 19:27:04.441515  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.441525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:04.441532  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:04.441581  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:04.476663  148123 cri.go:89] found id: ""
	I1010 19:27:04.476690  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.476698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:04.476704  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:04.476765  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:04.512124  148123 cri.go:89] found id: ""
	I1010 19:27:04.512171  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.512186  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:04.512195  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:04.512260  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:04.547902  148123 cri.go:89] found id: ""
	I1010 19:27:04.547929  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.547940  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:04.547949  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:04.548008  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:04.585257  148123 cri.go:89] found id: ""
	I1010 19:27:04.585287  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.585297  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:04.585303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:04.585352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:04.621019  148123 cri.go:89] found id: ""
	I1010 19:27:04.621048  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.621057  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:04.621068  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:04.621080  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:04.662770  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:04.662801  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:04.714551  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:04.714592  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:04.730922  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:04.730951  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:04.841139  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.841163  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:04.841178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.428528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:07.442423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:07.442498  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:07.476722  148123 cri.go:89] found id: ""
	I1010 19:27:07.476753  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.476764  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:07.476772  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:07.476835  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:07.512211  148123 cri.go:89] found id: ""
	I1010 19:27:07.512245  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.512256  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:07.512263  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:07.512325  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:07.546546  148123 cri.go:89] found id: ""
	I1010 19:27:07.546588  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.546601  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:07.546619  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:07.546688  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:07.585743  148123 cri.go:89] found id: ""
	I1010 19:27:07.585768  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.585777  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:07.585783  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:07.585848  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:07.626827  148123 cri.go:89] found id: ""
	I1010 19:27:07.626855  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.626865  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:07.626874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:07.626960  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:07.661910  148123 cri.go:89] found id: ""
	I1010 19:27:07.661940  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.661948  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:07.661955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:07.662072  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:07.698255  148123 cri.go:89] found id: ""
	I1010 19:27:07.698288  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.698301  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:07.698309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:07.698374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:07.734389  148123 cri.go:89] found id: ""
	I1010 19:27:07.734417  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.734428  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:07.734440  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:07.734454  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.814089  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:07.814130  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:07.854799  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:07.854831  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:07.906595  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:07.906635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:07.921928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:07.921966  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:07.989839  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:10.490115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:10.504216  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:10.504292  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:10.554789  148123 cri.go:89] found id: ""
	I1010 19:27:10.554815  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.554825  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:10.554831  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:10.554889  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:10.618970  148123 cri.go:89] found id: ""
	I1010 19:27:10.618997  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.619005  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:10.619012  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:10.619061  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:10.666900  148123 cri.go:89] found id: ""
	I1010 19:27:10.666933  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.666946  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:10.666953  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:10.667023  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:10.704364  148123 cri.go:89] found id: ""
	I1010 19:27:10.704405  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.704430  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:10.704450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:10.704525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:10.742341  148123 cri.go:89] found id: ""
	I1010 19:27:10.742380  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.742389  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:10.742396  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:10.742461  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:10.783056  148123 cri.go:89] found id: ""
	I1010 19:27:10.783088  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.783098  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:10.783105  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:10.783174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:10.823198  148123 cri.go:89] found id: ""
	I1010 19:27:10.823224  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.823233  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:10.823243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:10.823302  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:10.858154  148123 cri.go:89] found id: ""
	I1010 19:27:10.858195  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.858204  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:10.858213  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:10.858229  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:10.935491  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:10.935518  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:10.935532  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:11.015653  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:11.015694  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:11.058765  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:11.058793  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:11.116748  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:11.116799  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:13.631579  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:13.644885  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:13.644953  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:13.685242  148123 cri.go:89] found id: ""
	I1010 19:27:13.685273  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.685285  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:13.685294  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:13.685364  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:13.720831  148123 cri.go:89] found id: ""
	I1010 19:27:13.720866  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.720877  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:13.720885  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:13.720947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:13.766374  148123 cri.go:89] found id: ""
	I1010 19:27:13.766404  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.766413  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:13.766419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:13.766468  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:13.804550  148123 cri.go:89] found id: ""
	I1010 19:27:13.804585  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.804597  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:13.804606  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:13.804674  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:13.842406  148123 cri.go:89] found id: ""
	I1010 19:27:13.842432  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.842440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:13.842460  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:13.842678  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:13.882365  148123 cri.go:89] found id: ""
	I1010 19:27:13.882402  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.882415  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:13.882423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:13.882505  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:13.926016  148123 cri.go:89] found id: ""
	I1010 19:27:13.926052  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.926065  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:13.926075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:13.926177  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:13.960485  148123 cri.go:89] found id: ""
	I1010 19:27:13.960523  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.960533  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:13.960542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:13.960558  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:14.013013  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:14.013055  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:14.026906  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:14.026935  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:14.104469  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:14.104494  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:14.104507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:14.185917  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:14.185959  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:16.732088  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:16.746646  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:16.746710  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:16.784715  148123 cri.go:89] found id: ""
	I1010 19:27:16.784739  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.784749  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:16.784755  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:16.784807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:16.824181  148123 cri.go:89] found id: ""
	I1010 19:27:16.824210  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.824220  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:16.824228  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:16.824289  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:16.860901  148123 cri.go:89] found id: ""
	I1010 19:27:16.860932  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.860941  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:16.860947  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:16.860997  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:16.895127  148123 cri.go:89] found id: ""
	I1010 19:27:16.895159  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.895175  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:16.895183  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:16.895266  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:16.931989  148123 cri.go:89] found id: ""
	I1010 19:27:16.932020  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.932028  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:16.932035  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:16.932086  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:16.967254  148123 cri.go:89] found id: ""
	I1010 19:27:16.967283  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.967292  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:16.967299  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:16.967347  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:17.003010  148123 cri.go:89] found id: ""
	I1010 19:27:17.003035  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.003043  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:17.003050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:17.003097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:17.040053  148123 cri.go:89] found id: ""
	I1010 19:27:17.040093  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.040102  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:17.040118  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:17.040131  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:17.093176  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:17.093217  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:17.108086  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:17.108116  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:17.186423  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:17.186452  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:17.186467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:17.271884  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:17.271926  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:19.817954  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:19.831924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:19.831993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:19.873415  148123 cri.go:89] found id: ""
	I1010 19:27:19.873441  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.873459  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:19.873466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:19.873527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:19.912174  148123 cri.go:89] found id: ""
	I1010 19:27:19.912207  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.912219  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:19.912227  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:19.912298  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:19.948422  148123 cri.go:89] found id: ""
	I1010 19:27:19.948456  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.948466  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:19.948472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:19.948524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:19.983917  148123 cri.go:89] found id: ""
	I1010 19:27:19.983949  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.983962  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:19.983970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:19.984024  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:20.019160  148123 cri.go:89] found id: ""
	I1010 19:27:20.019187  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.019198  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:20.019207  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:20.019271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:20.056073  148123 cri.go:89] found id: ""
	I1010 19:27:20.056104  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.056116  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:20.056124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:20.056187  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:20.095877  148123 cri.go:89] found id: ""
	I1010 19:27:20.095916  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.095928  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:20.095935  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:20.096007  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:20.132360  148123 cri.go:89] found id: ""
	I1010 19:27:20.132385  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.132394  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:20.132402  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:20.132413  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:20.190573  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:20.190619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:20.205785  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:20.205819  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:20.278882  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:20.278911  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:20.278924  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:20.363982  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:20.364024  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:22.906059  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:22.919839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:22.919912  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:22.958203  148123 cri.go:89] found id: ""
	I1010 19:27:22.958242  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.958252  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:22.958258  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:22.958312  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:22.995834  148123 cri.go:89] found id: ""
	I1010 19:27:22.995866  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.995874  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:22.995880  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:22.995945  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:23.035913  148123 cri.go:89] found id: ""
	I1010 19:27:23.035950  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.035962  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:23.035970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:23.036039  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:23.073995  148123 cri.go:89] found id: ""
	I1010 19:27:23.074036  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.074049  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:23.074057  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:23.074117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:23.111190  148123 cri.go:89] found id: ""
	I1010 19:27:23.111222  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.111234  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:23.111243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:23.111305  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:23.147771  148123 cri.go:89] found id: ""
	I1010 19:27:23.147797  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.147806  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:23.147814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:23.147878  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:23.185485  148123 cri.go:89] found id: ""
	I1010 19:27:23.185517  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.185527  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:23.185535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:23.185598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:23.222030  148123 cri.go:89] found id: ""
	I1010 19:27:23.222060  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.222070  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:23.222081  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:23.222097  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:23.301826  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:23.301849  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:23.301861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:23.378688  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:23.378730  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:23.426683  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:23.426717  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:23.478425  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:23.478467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:25.993768  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:26.008311  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:26.008383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:26.046315  148123 cri.go:89] found id: ""
	I1010 19:27:26.046343  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.046355  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:26.046364  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:26.046418  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:26.081823  148123 cri.go:89] found id: ""
	I1010 19:27:26.081847  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.081855  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:26.081861  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:26.081913  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:26.117139  148123 cri.go:89] found id: ""
	I1010 19:27:26.117167  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.117183  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:26.117191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:26.117242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:26.154477  148123 cri.go:89] found id: ""
	I1010 19:27:26.154501  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.154510  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:26.154517  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:26.154568  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:26.187497  148123 cri.go:89] found id: ""
	I1010 19:27:26.187527  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.187535  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:26.187541  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:26.187604  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:26.223110  148123 cri.go:89] found id: ""
	I1010 19:27:26.223140  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.223151  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:26.223160  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:26.223226  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:26.264975  148123 cri.go:89] found id: ""
	I1010 19:27:26.265003  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.265011  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:26.265017  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:26.265067  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:26.307467  148123 cri.go:89] found id: ""
	I1010 19:27:26.307494  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.307503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:26.307512  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:26.307524  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:26.346313  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:26.346342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:26.400069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:26.400109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:26.415467  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:26.415500  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:26.486986  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:26.487016  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:26.487033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:29.064522  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:29.077631  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:29.077696  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:29.116127  148123 cri.go:89] found id: ""
	I1010 19:27:29.116154  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.116164  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:29.116172  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:29.116241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:29.152863  148123 cri.go:89] found id: ""
	I1010 19:27:29.152893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.152901  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:29.152907  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:29.152963  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:29.189951  148123 cri.go:89] found id: ""
	I1010 19:27:29.189983  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.189992  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:29.189999  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:29.190060  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:29.226611  148123 cri.go:89] found id: ""
	I1010 19:27:29.226646  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.226657  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:29.226665  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:29.226729  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:29.271884  148123 cri.go:89] found id: ""
	I1010 19:27:29.271921  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.271934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:29.271944  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:29.272016  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:29.306128  148123 cri.go:89] found id: ""
	I1010 19:27:29.306168  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.306181  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:29.306191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:29.306255  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:29.341869  148123 cri.go:89] found id: ""
	I1010 19:27:29.341893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.341901  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:29.341908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:29.341962  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:29.376213  148123 cri.go:89] found id: ""
	I1010 19:27:29.376240  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.376249  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:29.376259  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:29.376273  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:29.428827  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:29.428878  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:29.443719  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:29.443754  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:29.516166  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:29.516193  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:29.516208  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:29.596643  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:29.596681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:32.135286  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:32.148791  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:32.148893  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:32.185511  148123 cri.go:89] found id: ""
	I1010 19:27:32.185542  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.185553  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:32.185560  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:32.185626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:32.222908  148123 cri.go:89] found id: ""
	I1010 19:27:32.222934  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.222942  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:32.222948  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:32.222999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:32.260998  148123 cri.go:89] found id: ""
	I1010 19:27:32.261033  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.261045  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:32.261063  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:32.261128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:32.298311  148123 cri.go:89] found id: ""
	I1010 19:27:32.298339  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.298348  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:32.298355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:32.298409  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:32.334134  148123 cri.go:89] found id: ""
	I1010 19:27:32.334220  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.334236  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:32.334249  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:32.334319  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:32.369690  148123 cri.go:89] found id: ""
	I1010 19:27:32.369723  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.369735  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:32.369743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:32.369807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:32.407164  148123 cri.go:89] found id: ""
	I1010 19:27:32.407210  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.407218  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:32.407224  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:32.407279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:32.446468  148123 cri.go:89] found id: ""
	I1010 19:27:32.446494  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.446505  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:32.446519  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:32.446535  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:32.497953  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:32.497997  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:32.513590  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:32.513620  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:32.594037  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:32.594072  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:32.594090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:32.673546  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:32.673587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:35.226084  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:35.241152  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:35.241222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:35.283208  148123 cri.go:89] found id: ""
	I1010 19:27:35.283245  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.283269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:35.283286  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:35.283346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:35.320406  148123 cri.go:89] found id: ""
	I1010 19:27:35.320444  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.320457  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:35.320466  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:35.320523  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:35.360984  148123 cri.go:89] found id: ""
	I1010 19:27:35.361015  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.361027  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:35.361034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:35.361097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:35.400191  148123 cri.go:89] found id: ""
	I1010 19:27:35.400219  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.400230  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:35.400244  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:35.400314  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:35.438092  148123 cri.go:89] found id: ""
	I1010 19:27:35.438126  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.438139  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:35.438147  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:35.438215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:35.472656  148123 cri.go:89] found id: ""
	I1010 19:27:35.472681  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.472688  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:35.472694  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:35.472743  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.505895  148123 cri.go:89] found id: ""
	I1010 19:27:35.505931  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.505942  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:35.505950  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:35.506015  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:35.540406  148123 cri.go:89] found id: ""
	I1010 19:27:35.540441  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.540451  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:35.540462  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:35.540478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:35.555874  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:35.555903  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:35.628400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:35.628427  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:35.628453  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:35.708262  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:35.708305  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:35.752796  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:35.752824  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.306212  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:38.319473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:38.319543  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:38.357493  148123 cri.go:89] found id: ""
	I1010 19:27:38.357520  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.357529  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:38.357536  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:38.357588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:38.395908  148123 cri.go:89] found id: ""
	I1010 19:27:38.395935  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.395943  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:38.395949  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:38.396010  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:38.434495  148123 cri.go:89] found id: ""
	I1010 19:27:38.434529  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.434539  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:38.434545  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:38.434608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:38.474647  148123 cri.go:89] found id: ""
	I1010 19:27:38.474686  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.474698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:38.474710  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:38.474784  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:38.511105  148123 cri.go:89] found id: ""
	I1010 19:27:38.511133  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.511141  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:38.511148  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:38.511199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:38.551011  148123 cri.go:89] found id: ""
	I1010 19:27:38.551055  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.551066  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:38.551074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:38.551139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:38.590579  148123 cri.go:89] found id: ""
	I1010 19:27:38.590606  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.590615  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:38.590621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:38.590686  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:38.627775  148123 cri.go:89] found id: ""
	I1010 19:27:38.627812  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.627826  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:38.627838  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:38.627854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.683195  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:38.683239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:38.697220  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:38.697250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:38.771619  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:38.771651  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:38.771668  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:38.851324  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:38.851362  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.408165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:41.422958  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:41.423032  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:41.466774  148123 cri.go:89] found id: ""
	I1010 19:27:41.466806  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.466816  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:41.466823  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:41.466874  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:41.506429  148123 cri.go:89] found id: ""
	I1010 19:27:41.506470  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.506482  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:41.506489  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:41.506555  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:41.548929  148123 cri.go:89] found id: ""
	I1010 19:27:41.548965  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.548976  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:41.548983  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:41.549044  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:41.592243  148123 cri.go:89] found id: ""
	I1010 19:27:41.592274  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.592283  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:41.592290  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:41.592342  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:41.628516  148123 cri.go:89] found id: ""
	I1010 19:27:41.628558  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.628570  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:41.628578  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:41.628650  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:41.665011  148123 cri.go:89] found id: ""
	I1010 19:27:41.665041  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.665053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:41.665060  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:41.665112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:41.702650  148123 cri.go:89] found id: ""
	I1010 19:27:41.702681  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.702692  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:41.702700  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:41.702764  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:41.738971  148123 cri.go:89] found id: ""
	I1010 19:27:41.739002  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.739021  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:41.739031  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:41.739062  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:41.813296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:41.813321  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:41.813335  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:41.895138  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:41.895185  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.941975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:41.942012  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:41.994838  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:41.994888  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:44.511153  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:44.524871  148123 kubeadm.go:597] duration metric: took 4m4.194337013s to restartPrimaryControlPlane
	W1010 19:27:44.524953  148123 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:44.524988  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:44.994063  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:27:45.010926  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:27:45.021757  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:27:45.032246  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:27:45.032270  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:27:45.032326  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:27:45.042799  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:27:45.042866  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:27:45.053017  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:27:45.063373  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:27:45.063445  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:27:45.074689  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.085187  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:27:45.085241  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.096275  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:27:45.106686  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:27:45.106750  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:27:45.117018  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:27:45.358920  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:29:41.275119  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:29:41.275272  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:29:41.276822  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:29:41.276919  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:29:41.277017  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:29:41.277142  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:29:41.277254  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:29:41.277357  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:29:41.279069  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:29:41.279160  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:29:41.279217  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:29:41.279306  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:29:41.279381  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:29:41.279484  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:29:41.279576  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:29:41.279674  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:29:41.279779  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:29:41.279906  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:29:41.279971  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:29:41.280005  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:29:41.280052  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:29:41.280095  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:29:41.280144  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:29:41.280219  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:29:41.280317  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:29:41.280474  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:29:41.280583  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:29:41.280648  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:29:41.280736  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:29:41.282180  148123 out.go:235]   - Booting up control plane ...
	I1010 19:29:41.282266  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:29:41.282336  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:29:41.282414  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:29:41.282538  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:29:41.282748  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:29:41.282822  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:29:41.282896  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283061  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283123  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283287  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283348  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283519  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283623  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283809  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283900  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.284115  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.284135  148123 kubeadm.go:310] 
	I1010 19:29:41.284201  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:29:41.284243  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:29:41.284257  148123 kubeadm.go:310] 
	I1010 19:29:41.284307  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:29:41.284346  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:29:41.284481  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:29:41.284505  148123 kubeadm.go:310] 
	I1010 19:29:41.284655  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:29:41.284714  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:29:41.284752  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:29:41.284758  148123 kubeadm.go:310] 
	I1010 19:29:41.284913  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:29:41.285038  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:29:41.285055  148123 kubeadm.go:310] 
	I1010 19:29:41.285169  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:29:41.285307  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:29:41.285475  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:29:41.285587  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:29:41.285644  148123 kubeadm.go:310] 
	W1010 19:29:41.285747  148123 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1010 19:29:41.285792  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:29:46.681684  148123 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.395862838s)
	I1010 19:29:46.681797  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:46.696927  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:29:46.708250  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:29:46.708280  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:29:46.708339  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:29:46.719748  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:29:46.719828  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:29:46.730892  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:29:46.742329  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:29:46.742401  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:29:46.752919  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.762860  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:29:46.762932  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.772513  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:29:46.781672  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:29:46.781746  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:29:46.791250  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:29:47.018706  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:31:43.351851  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:31:43.351992  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:31:43.353664  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:31:43.353752  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:31:43.353861  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:31:43.353998  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:31:43.354130  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:31:43.354235  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:31:43.356450  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:31:43.356557  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:31:43.356652  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:31:43.356783  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:31:43.356902  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:31:43.357002  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:31:43.357074  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:31:43.357181  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:31:43.357247  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:31:43.357325  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:31:43.357408  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:31:43.357459  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:31:43.357529  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:31:43.357604  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:31:43.357676  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:31:43.357751  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:31:43.357830  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:31:43.357979  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:31:43.358092  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:31:43.358158  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:31:43.358263  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:31:43.360216  148123 out.go:235]   - Booting up control plane ...
	I1010 19:31:43.360332  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:31:43.360463  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:31:43.360551  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:31:43.360673  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:31:43.360907  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:31:43.360976  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:31:43.361058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361309  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361410  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361648  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361742  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361960  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362308  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362399  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362640  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362652  148123 kubeadm.go:310] 
	I1010 19:31:43.362708  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:31:43.362758  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:31:43.362768  148123 kubeadm.go:310] 
	I1010 19:31:43.362809  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:31:43.362858  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:31:43.362981  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:31:43.362993  148123 kubeadm.go:310] 
	I1010 19:31:43.363119  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:31:43.363164  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:31:43.363204  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:31:43.363219  148123 kubeadm.go:310] 
	I1010 19:31:43.363344  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:31:43.363454  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:31:43.363465  148123 kubeadm.go:310] 
	I1010 19:31:43.363591  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:31:43.363708  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:31:43.363803  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:31:43.363892  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:31:43.363978  148123 kubeadm.go:394] duration metric: took 8m3.085634474s to StartCluster
	I1010 19:31:43.364052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:31:43.364159  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:31:43.364268  148123 kubeadm.go:310] 
	I1010 19:31:43.413041  148123 cri.go:89] found id: ""
	I1010 19:31:43.413075  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.413086  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:31:43.413092  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:31:43.413206  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:31:43.456454  148123 cri.go:89] found id: ""
	I1010 19:31:43.456487  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.456499  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:31:43.456507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:31:43.456567  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:31:43.498582  148123 cri.go:89] found id: ""
	I1010 19:31:43.498614  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.498624  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:31:43.498633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:31:43.498694  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:31:43.540557  148123 cri.go:89] found id: ""
	I1010 19:31:43.540589  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.540598  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:31:43.540605  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:31:43.540661  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:31:43.576743  148123 cri.go:89] found id: ""
	I1010 19:31:43.576776  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.576787  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:31:43.576796  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:31:43.576887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:31:43.614620  148123 cri.go:89] found id: ""
	I1010 19:31:43.614652  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.614662  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:31:43.614668  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:31:43.614730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:31:43.652008  148123 cri.go:89] found id: ""
	I1010 19:31:43.652061  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.652075  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:31:43.652084  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:31:43.652153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:31:43.689990  148123 cri.go:89] found id: ""
	I1010 19:31:43.690019  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.690028  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:31:43.690043  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:31:43.690056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:31:43.745634  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:31:43.745673  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:31:43.760064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:31:43.760095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:31:43.840168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:31:43.840197  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:31:43.840214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:31:43.943265  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:31:43.943303  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1010 19:31:43.987758  148123 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:31:43.987816  148123 out.go:270] * 
	W1010 19:31:43.987891  148123 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.987906  148123 out.go:270] * 
	W1010 19:31:43.988703  148123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:31:43.991928  148123 out.go:201] 
	W1010 19:31:43.993494  148123 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.993556  148123 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:31:43.993585  148123 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1010 19:31:43.995273  148123 out.go:201] 

                                                
                                                
** /stderr **
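The stderr block above repeatedly reports the kubelet health endpoint refusing connections and points at systemctl, journalctl, and crictl for diagnosis. A minimal sketch of reproducing those checks by hand on the affected node follows; it assumes the profile name used in this test and the CRI-O socket path printed in the log, and uses only the commands the log itself suggests.

	# open a shell on the node for this profile (name taken from the failing run above)
	minikube ssh -p old-k8s-version-947203
	# inside the node: check whether the kubelet is running and why it may have exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# probe the healthz endpoint that the kubelet-check loop was polling
	curl -sS http://localhost:10248/healthz
	# list any control-plane containers CRI-O started (command from the kubeadm hint)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause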
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-947203 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
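The run exits with status 109 (K8S_KUBELET_NOT_RUNNING), and minikube's final warning in the log suggests passing --extra-config=kubelet.cgroup-driver=systemd. A sketch of retrying the same start invocation with that flag appended is shown below; the flags are copied from the failing command above, and whether the extra config actually resolves the kubelet failure is not established by this report.

	out/minikube-linux-amd64 start -p old-k8s-version-947203 --memory=2200 \
		--alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
		--disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio \
		--kubernetes-version=v1.20.0 \
		--extra-config=kubelet.cgroup-driver=systemd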
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 2 (250.889835ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-947203 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-947203 logs -n 25: (1.590604923s)
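The boxed advice earlier in the log asks for a full log bundle when reporting the failure upstream, rather than the trimmed `logs -n 25` capture used by this post-mortem. A sketch of collecting it, assuming the same profile and binary:

	out/minikube-linux-amd64 -p old-k8s-version-947203 logs --file=logs.txt
	# attach logs.txt to a new issue at https://github.com/kubernetes/minikube/issues/new/choose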
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-029826             | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-029826                  | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-029826 --memory=2200 --alsologtostderr   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-541370            | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-029826 image list                           | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:17 UTC | 10 Oct 24 19:18 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320324                  | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-947203        | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-361847  | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-541370                 | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-947203             | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-361847       | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC | 10 Oct 24 19:29 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 19:21:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 19:21:13.943219  148525 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:21:13.943336  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943343  148525 out.go:358] Setting ErrFile to fd 2...
	I1010 19:21:13.943347  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943560  148525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:21:13.944109  148525 out.go:352] Setting JSON to false
	I1010 19:21:13.945219  148525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11020,"bootTime":1728577054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:21:13.945321  148525 start.go:139] virtualization: kvm guest
	I1010 19:21:13.947915  148525 out.go:177] * [default-k8s-diff-port-361847] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:21:13.950021  148525 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:21:13.950037  148525 notify.go:220] Checking for updates...
	I1010 19:21:13.952994  148525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:21:13.954661  148525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:21:13.956438  148525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:21:13.958502  148525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:21:13.960099  148525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:21:13.961930  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:21:13.962374  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.962450  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.978323  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I1010 19:21:13.978926  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.979520  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.979538  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.979954  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.980144  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:13.980446  148525 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:21:13.980745  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.980784  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.996046  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I1010 19:21:13.996534  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.997069  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.997097  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.997530  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.997788  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:14.033593  148525 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 19:21:14.035367  148525 start.go:297] selected driver: kvm2
	I1010 19:21:14.035394  148525 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.035526  148525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:21:14.036341  148525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.036452  148525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:21:14.052462  148525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:21:14.052918  148525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:21:14.052967  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:21:14.053019  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:21:14.053067  148525 start.go:340] cluster config:
	{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.053178  148525 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.055485  148525 out.go:177] * Starting "default-k8s-diff-port-361847" primary control-plane node in "default-k8s-diff-port-361847" cluster
	I1010 19:21:16.773106  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:14.056945  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:21:14.057002  148525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 19:21:14.057014  148525 cache.go:56] Caching tarball of preloaded images
	I1010 19:21:14.057118  148525 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:21:14.057134  148525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 19:21:14.057268  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:21:14.057476  148525 start.go:360] acquireMachinesLock for default-k8s-diff-port-361847: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:21:22.853158  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:25.925174  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:32.005160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:35.077198  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:41.157130  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:44.229127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:50.309136  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:53.381191  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:59.461129  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:02.533201  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:08.613124  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:11.685169  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:17.765161  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:20.837208  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:26.917127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:29.989172  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:36.069147  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:39.141173  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:45.221160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:48.293141  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:51.297376  147758 start.go:364] duration metric: took 3m49.312490934s to acquireMachinesLock for "embed-certs-541370"
	I1010 19:22:51.297453  147758 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:22:51.297464  147758 fix.go:54] fixHost starting: 
	I1010 19:22:51.297787  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:22:51.297848  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:22:51.314087  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1010 19:22:51.314588  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:22:51.315115  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:22:51.315138  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:22:51.315509  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:22:51.315691  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:22:51.315879  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:22:51.317597  147758 fix.go:112] recreateIfNeeded on embed-certs-541370: state=Stopped err=<nil>
	I1010 19:22:51.317621  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	W1010 19:22:51.317781  147758 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:22:51.319664  147758 out.go:177] * Restarting existing kvm2 VM for "embed-certs-541370" ...
	I1010 19:22:51.320967  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Start
	I1010 19:22:51.321134  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring networks are active...
	I1010 19:22:51.322026  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network default is active
	I1010 19:22:51.322468  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network mk-embed-certs-541370 is active
	I1010 19:22:51.322874  147758 main.go:141] libmachine: (embed-certs-541370) Getting domain xml...
	I1010 19:22:51.323687  147758 main.go:141] libmachine: (embed-certs-541370) Creating domain...
	I1010 19:22:51.294881  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:22:51.294927  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295226  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:22:51.295256  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295454  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:22:51.297198  147213 machine.go:96] duration metric: took 4m37.414594306s to provisionDockerMachine
	I1010 19:22:51.297252  147213 fix.go:56] duration metric: took 4m37.436635356s for fixHost
	I1010 19:22:51.297259  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 4m37.436668423s
	W1010 19:22:51.297278  147213 start.go:714] error starting host: provision: host is not running
	W1010 19:22:51.297382  147213 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1010 19:22:51.297396  147213 start.go:729] Will try again in 5 seconds ...
	I1010 19:22:52.568699  147758 main.go:141] libmachine: (embed-certs-541370) Waiting to get IP...
	I1010 19:22:52.569582  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.569952  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.570018  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.569935  148914 retry.go:31] will retry after 261.244287ms: waiting for machine to come up
	I1010 19:22:52.832639  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.833280  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.833310  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.833200  148914 retry.go:31] will retry after 304.116732ms: waiting for machine to come up
	I1010 19:22:53.138770  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.139091  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.139124  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.139055  148914 retry.go:31] will retry after 484.354474ms: waiting for machine to come up
	I1010 19:22:53.624831  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.625293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.625323  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.625234  148914 retry.go:31] will retry after 591.916836ms: waiting for machine to come up
	I1010 19:22:54.219214  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.219732  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.219763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.219673  148914 retry.go:31] will retry after 614.162479ms: waiting for machine to come up
	I1010 19:22:54.835573  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.836038  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.836063  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.835988  148914 retry.go:31] will retry after 824.170953ms: waiting for machine to come up
	I1010 19:22:55.662092  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:55.662646  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:55.662668  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:55.662586  148914 retry.go:31] will retry after 928.483848ms: waiting for machine to come up
	I1010 19:22:56.593200  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:56.593724  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:56.593756  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:56.593679  148914 retry.go:31] will retry after 941.138644ms: waiting for machine to come up
	I1010 19:22:56.299351  147213 start.go:360] acquireMachinesLock for no-preload-320324: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:22:57.536977  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:57.537403  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:57.537429  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:57.537331  148914 retry.go:31] will retry after 1.262203584s: waiting for machine to come up
	I1010 19:22:58.801921  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:58.802420  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:58.802454  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:58.802381  148914 retry.go:31] will retry after 2.154751391s: waiting for machine to come up
	I1010 19:23:00.960100  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:00.960661  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:00.960684  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:00.960607  148914 retry.go:31] will retry after 1.945155171s: waiting for machine to come up
	I1010 19:23:02.907705  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:02.908097  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:02.908129  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:02.908038  148914 retry.go:31] will retry after 3.245262469s: waiting for machine to come up
	I1010 19:23:06.157527  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:06.157897  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:06.157925  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:06.157858  148914 retry.go:31] will retry after 3.973579024s: waiting for machine to come up
	I1010 19:23:11.321975  148123 start.go:364] duration metric: took 3m17.648521443s to acquireMachinesLock for "old-k8s-version-947203"
	I1010 19:23:11.322043  148123 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:11.322056  148123 fix.go:54] fixHost starting: 
	I1010 19:23:11.322537  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:11.322596  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:11.339586  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1010 19:23:11.340091  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:11.340686  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:23:11.340719  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:11.341101  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:11.341295  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:11.341453  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetState
	I1010 19:23:11.343215  148123 fix.go:112] recreateIfNeeded on old-k8s-version-947203: state=Stopped err=<nil>
	I1010 19:23:11.343247  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	W1010 19:23:11.343408  148123 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:11.345653  148123 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-947203" ...
	I1010 19:23:10.135369  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has current primary IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135830  147758 main.go:141] libmachine: (embed-certs-541370) Found IP for machine: 192.168.39.120
	I1010 19:23:10.135839  147758 main.go:141] libmachine: (embed-certs-541370) Reserving static IP address...
	I1010 19:23:10.136283  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.136311  147758 main.go:141] libmachine: (embed-certs-541370) Reserved static IP address: 192.168.39.120
	I1010 19:23:10.136327  147758 main.go:141] libmachine: (embed-certs-541370) DBG | skip adding static IP to network mk-embed-certs-541370 - found existing host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"}
	I1010 19:23:10.136339  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Getting to WaitForSSH function...
	I1010 19:23:10.136351  147758 main.go:141] libmachine: (embed-certs-541370) Waiting for SSH to be available...
	I1010 19:23:10.138861  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139259  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.139293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139438  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH client type: external
	I1010 19:23:10.139472  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa (-rw-------)
	I1010 19:23:10.139517  147758 main.go:141] libmachine: (embed-certs-541370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:10.139541  147758 main.go:141] libmachine: (embed-certs-541370) DBG | About to run SSH command:
	I1010 19:23:10.139562  147758 main.go:141] libmachine: (embed-certs-541370) DBG | exit 0
	I1010 19:23:10.261078  147758 main.go:141] libmachine: (embed-certs-541370) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:10.261533  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetConfigRaw
	I1010 19:23:10.262192  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.265071  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265467  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.265515  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265737  147758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/config.json ...
	I1010 19:23:10.265941  147758 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:10.265960  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:10.266188  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.269186  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269618  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.269649  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269799  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.269984  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270206  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270345  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.270550  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.270834  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.270849  147758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:10.373285  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:10.373316  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373625  147758 buildroot.go:166] provisioning hostname "embed-certs-541370"
	I1010 19:23:10.373660  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373835  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.376552  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.376951  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.376994  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.377132  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.377332  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377489  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377606  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.377745  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.377918  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.377930  147758 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-541370 && echo "embed-certs-541370" | sudo tee /etc/hostname
	I1010 19:23:10.495847  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-541370
	
	I1010 19:23:10.495880  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.498868  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499205  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.499247  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499362  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.499556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499700  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499829  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.499961  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.500187  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.500210  147758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-541370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-541370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-541370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:10.614318  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:10.614357  147758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:10.614412  147758 buildroot.go:174] setting up certificates
	I1010 19:23:10.614429  147758 provision.go:84] configureAuth start
	I1010 19:23:10.614457  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.614763  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.617457  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.617888  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.617916  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.618078  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.620243  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620635  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.620666  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620789  147758 provision.go:143] copyHostCerts
	I1010 19:23:10.620895  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:10.620913  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:10.620998  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:10.621111  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:10.621123  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:10.621159  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:10.621245  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:10.621257  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:10.621292  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:10.621364  147758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.embed-certs-541370 san=[127.0.0.1 192.168.39.120 embed-certs-541370 localhost minikube]
	I1010 19:23:10.697456  147758 provision.go:177] copyRemoteCerts
	I1010 19:23:10.697515  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:10.697547  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.700439  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.700799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700956  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.701162  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.701320  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.701465  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:10.783442  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:10.808446  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 19:23:10.832117  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:23:10.856286  147758 provision.go:87] duration metric: took 241.840139ms to configureAuth
	I1010 19:23:10.856318  147758 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:10.856528  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:10.856640  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.859252  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859677  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.859708  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859916  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.860087  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860222  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.860524  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.860688  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.860702  147758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:11.086349  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:11.086375  147758 machine.go:96] duration metric: took 820.421344ms to provisionDockerMachine
	I1010 19:23:11.086386  147758 start.go:293] postStartSetup for "embed-certs-541370" (driver="kvm2")
	I1010 19:23:11.086401  147758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:11.086423  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.086755  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:11.086783  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.089482  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.089838  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.089860  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.090042  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.090253  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.090410  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.090535  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.172474  147758 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:11.176699  147758 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:11.176733  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:11.176800  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:11.176899  147758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:11.177044  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:11.186985  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:11.211385  147758 start.go:296] duration metric: took 124.982089ms for postStartSetup
	I1010 19:23:11.211442  147758 fix.go:56] duration metric: took 19.913977793s for fixHost
	I1010 19:23:11.211472  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.214421  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214780  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.214812  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214999  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.215219  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215429  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215612  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.215786  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:11.215974  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:11.215985  147758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:11.321786  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588191.295446348
	
	I1010 19:23:11.321814  147758 fix.go:216] guest clock: 1728588191.295446348
	I1010 19:23:11.321822  147758 fix.go:229] Guest: 2024-10-10 19:23:11.295446348 +0000 UTC Remote: 2024-10-10 19:23:11.211447413 +0000 UTC m=+249.373680838 (delta=83.998935ms)
	I1010 19:23:11.321870  147758 fix.go:200] guest clock delta is within tolerance: 83.998935ms
	I1010 19:23:11.321877  147758 start.go:83] releasing machines lock for "embed-certs-541370", held for 20.024455781s
	I1010 19:23:11.321905  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.322169  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:11.325004  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325350  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.325375  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325566  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326090  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326294  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326383  147758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:11.326444  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.326501  147758 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:11.326529  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.329311  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329657  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.329690  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329713  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329866  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330057  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330160  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.330188  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.330204  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330346  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.330538  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330687  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330821  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.406525  147758 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:11.428958  147758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:11.577663  147758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:11.584024  147758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:11.584112  147758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:11.603163  147758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:11.603190  147758 start.go:495] detecting cgroup driver to use...
	I1010 19:23:11.603291  147758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:11.624744  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:11.645477  147758 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:11.645537  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:11.660216  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:11.675019  147758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:11.796038  147758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:11.967750  147758 docker.go:233] disabling docker service ...
	I1010 19:23:11.967828  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:11.983184  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:12.001603  147758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:12.149408  147758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:12.306724  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
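Note: because this profile uses the crio container runtime, both the cri-docker shim and docker itself are stopped, disabled, and masked before CRI-O is configured. A minimal sketch of the same sequence (same systemctl operations as in the log, consolidated here for readability):

	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	sudo systemctl is-active --quiet docker || echo "docker is not active"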
	I1010 19:23:12.324302  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:12.345426  147758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:12.345508  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.357812  147758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:12.357883  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.370095  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.382389  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.395000  147758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:12.408429  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.426851  147758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.450568  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.463434  147758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:12.474537  147758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:12.474606  147758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:12.489074  147758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:12.500048  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:12.635695  147758 ssh_runner.go:195] Run: sudo systemctl restart crio
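Note: the sysctl probe above exits with status 255 because the br_netfilter module is not loaded yet, so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist; loading the module and enabling IP forwarding fixes this before CRI-O is restarted. A minimal sketch of that recovery sequence (same commands as in the log):

	sudo modprobe br_netfilter                           # creates /proc/sys/net/bridge/*
	sudo sysctl net.bridge.bridge-nf-call-iptables       # should now resolve instead of failing
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"  # allow forwarding of pod traffic
	sudo systemctl daemon-reload && sudo systemctl restart crio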
	I1010 19:23:12.733511  147758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:12.733593  147758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:12.739072  147758 start.go:563] Will wait 60s for crictl version
	I1010 19:23:12.739138  147758 ssh_runner.go:195] Run: which crictl
	I1010 19:23:12.743675  147758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:12.792272  147758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:12.792379  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.829968  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.862579  147758 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:11.347158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .Start
	I1010 19:23:11.347359  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring networks are active...
	I1010 19:23:11.348252  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network default is active
	I1010 19:23:11.348689  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network mk-old-k8s-version-947203 is active
	I1010 19:23:11.349139  148123 main.go:141] libmachine: (old-k8s-version-947203) Getting domain xml...
	I1010 19:23:11.349799  148123 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:23:12.671472  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting to get IP...
	I1010 19:23:12.672628  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.673145  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.673278  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.673132  149057 retry.go:31] will retry after 280.111667ms: waiting for machine to come up
	I1010 19:23:12.954865  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.955325  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.955377  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.955300  149057 retry.go:31] will retry after 289.967238ms: waiting for machine to come up
	I1010 19:23:13.247039  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.247663  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.247769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.247632  149057 retry.go:31] will retry after 319.085935ms: waiting for machine to come up
	I1010 19:23:12.863797  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:12.867335  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.867760  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:12.867794  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.868029  147758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:12.872503  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:12.887684  147758 kubeadm.go:883] updating cluster {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:12.887809  147758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:12.887853  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:12.924155  147758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:12.924240  147758 ssh_runner.go:195] Run: which lz4
	I1010 19:23:12.928613  147758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:12.933024  147758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:12.933069  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:14.450790  147758 crio.go:462] duration metric: took 1.522223644s to copy over tarball
	I1010 19:23:14.450893  147758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:16.642155  147758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191220673s)
	I1010 19:23:16.642193  147758 crio.go:469] duration metric: took 2.191371146s to extract the tarball
	I1010 19:23:16.642202  147758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:16.679611  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:16.723840  147758 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:16.723865  147758 cache_images.go:84] Images are preloaded, skipping loading
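Note: whether the preload tarball needs to be copied over is decided from the image list CRI-O reports; after extraction the same query shows all control-plane images present, so loading is skipped. A minimal sketch of the check (same command as in the log, with a grep added here only for a quick eyeball check):

	sudo crictl images --output json            # raw list that minikube inspects
	sudo crictl images | grep kube-apiserver    # should show registry.k8s.io/kube-apiserver v1.31.1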
	I1010 19:23:16.723874  147758 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.1 crio true true} ...
	I1010 19:23:16.723998  147758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-541370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:16.724081  147758 ssh_runner.go:195] Run: crio config
	I1010 19:23:16.779659  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:16.779682  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:16.779693  147758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:16.779714  147758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-541370 NodeName:embed-certs-541370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:16.779842  147758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-541370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:16.779904  147758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:16.791424  147758 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:16.791493  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:16.801715  147758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1010 19:23:16.821364  147758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:16.842703  147758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
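Note: the kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new, later copied to /var/tmp/minikube/kubeadm.yaml, and then consumed phase by phase rather than through a single kubeadm init. A minimal sketch of how it is applied (the same phase commands that appear further down in this log):

	K=/var/lib/minikube/binaries/v1.31.1   # kubeadm/kubelet binaries staged by minikube
	sudo env PATH="$K:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$K:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml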
	I1010 19:23:16.864835  147758 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:16.868928  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:13.568192  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.568690  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.568721  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.568657  149057 retry.go:31] will retry after 575.032546ms: waiting for machine to come up
	I1010 19:23:14.145650  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.146293  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.146319  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.146234  149057 retry.go:31] will retry after 702.803794ms: waiting for machine to come up
	I1010 19:23:14.851201  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.851736  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.851769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.851706  149057 retry.go:31] will retry after 883.195401ms: waiting for machine to come up
	I1010 19:23:15.736627  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:15.737199  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:15.737252  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:15.737155  149057 retry.go:31] will retry after 794.699815ms: waiting for machine to come up
	I1010 19:23:16.533952  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:16.534510  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:16.534545  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:16.534443  149057 retry.go:31] will retry after 961.751912ms: waiting for machine to come up
	I1010 19:23:17.497535  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:17.498007  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:17.498035  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:17.497936  149057 retry.go:31] will retry after 1.423503471s: waiting for machine to come up
	I1010 19:23:16.883162  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:17.027646  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:17.045083  147758 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370 for IP: 192.168.39.120
	I1010 19:23:17.045108  147758 certs.go:194] generating shared ca certs ...
	I1010 19:23:17.045130  147758 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:17.045491  147758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:17.045561  147758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:17.045579  147758 certs.go:256] generating profile certs ...
	I1010 19:23:17.045730  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/client.key
	I1010 19:23:17.045814  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key.dd7630a8
	I1010 19:23:17.045874  147758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key
	I1010 19:23:17.046015  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:17.046055  147758 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:17.046075  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:17.046114  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:17.046150  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:17.046177  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:17.046235  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:17.047131  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:17.087057  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:17.137707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:17.181707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:17.213227  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 19:23:17.247846  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:17.275989  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:17.301144  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 19:23:17.326232  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:17.350586  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:17.374666  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:17.399570  147758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:17.417846  147758 ssh_runner.go:195] Run: openssl version
	I1010 19:23:17.424206  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:17.436091  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441020  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441090  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.447318  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:17.459191  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:17.470878  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476185  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476248  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.482808  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:17.494626  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:17.506522  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511484  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511558  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.517445  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:17.529109  147758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:17.534139  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:17.540846  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:17.547429  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:17.554350  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:17.561036  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:17.567571  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
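Note: openssl x509 -checkend N exits 0 only if the certificate will still be valid N seconds from now, so the probes above (N=86400) confirm that no control-plane certificate expires within the next 24 hours before it is reused. A minimal sketch for one of the files from the log:

	if sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "cert valid for at least another 24h"
	else
	  echo "cert expires within 24h - would be regenerated"
	fi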
	I1010 19:23:17.574019  147758 kubeadm.go:392] StartCluster: {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:17.574128  147758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:17.574187  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.612699  147758 cri.go:89] found id: ""
	I1010 19:23:17.612804  147758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:17.623827  147758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:17.623856  147758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:17.623917  147758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:17.634732  147758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:17.635754  147758 kubeconfig.go:125] found "embed-certs-541370" server: "https://192.168.39.120:8443"
	I1010 19:23:17.637813  147758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:17.648543  147758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I1010 19:23:17.648590  147758 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:17.648606  147758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:17.648671  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.693966  147758 cri.go:89] found id: ""
	I1010 19:23:17.694057  147758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:17.715977  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:17.727871  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:17.727891  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:17.727942  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:17.738274  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:17.738340  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:17.748925  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:17.758945  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:17.759008  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:17.769169  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.779196  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:17.779282  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.790948  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:17.802264  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:17.802332  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
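Note: the four grep/rm pairs above are a staleness check: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 (here the files simply do not exist yet) is removed so the kubeadm phases below can regenerate it. A minimal consolidated sketch of the same logic (a hypothetical loop form, not the literal per-file commands from the log):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done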
	I1010 19:23:17.814009  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:17.826820  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:17.947270  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.128720  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.181409785s)
	I1010 19:23:19.128770  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.343735  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.419728  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.529802  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:19.529930  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.030019  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.530833  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.558314  147758 api_server.go:72] duration metric: took 1.028510044s to wait for apiserver process to appear ...
	I1010 19:23:20.558350  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:23:20.558375  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:20.558991  147758 api_server.go:269] stopped: https://192.168.39.120:8443/healthz: Get "https://192.168.39.120:8443/healthz": dial tcp 192.168.39.120:8443: connect: connection refused
	I1010 19:23:21.058727  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:18.923057  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:18.923636  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:18.923671  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:18.923582  149057 retry.go:31] will retry after 2.09836426s: waiting for machine to come up
	I1010 19:23:21.023500  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:21.024054  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:21.024084  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:21.023999  149057 retry.go:31] will retry after 2.809962093s: waiting for machine to come up
	I1010 19:23:23.187135  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:23:23.187187  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:23:23.187203  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.233367  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.233414  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:23.558658  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.575108  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.575139  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.058679  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.065735  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:24.065763  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.559440  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.565460  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:23:24.571828  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:23:24.571859  147758 api_server.go:131] duration metric: took 4.013501806s to wait for apiserver health ...
	I1010 19:23:24.571869  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:24.571875  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:24.573875  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:23:24.575458  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:23:24.586870  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
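The bridge CNI step above only records that a 496-byte /etc/cni/net.d/1-k8s.conflist was copied to the node; the file's contents are not shown in the log. As a rough illustration of what a bridge-plus-portmap conflist of that kind typically looks like, here is a hypothetical example written out from Go; the subnet, plugin options, and output path are assumptions, not the actual file generated in this run.

package main

import "os"

// A hypothetical bridge CNI config; not the actual 1-k8s.conflist from the run above.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Written to a scratch path here; minikube itself copies the file into /etc/cni/net.d/.
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
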
	I1010 19:23:24.624362  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:23:24.643465  147758 system_pods.go:59] 8 kube-system pods found
	I1010 19:23:24.643516  147758 system_pods.go:61] "coredns-7c65d6cfc9-fgtkg" [df696e79-ca6f-4d73-a57e-9c6cdc93c505] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:23:24.643532  147758 system_pods.go:61] "etcd-embed-certs-541370" [254fa12c-b0d2-499f-8dd9-c1505efeaaab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:23:24.643543  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [fcd3809d-d325-4481-8e86-c246e29458fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:23:24.643565  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ab0fdd6b-d9b7-48dc-b82f-29b21d2295ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:23:24.643584  147758 system_pods.go:61] "kube-proxy-f5l6x" [446383fa-44c5-4b9e-bfc5-e38799597e75] Running
	I1010 19:23:24.643592  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [1c6af7e7-ce16-4ae2-8feb-e5d474173de1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:23:24.643603  147758 system_pods.go:61] "metrics-server-6867b74b74-kw529" [aad00321-d499-4563-849e-286d6e699fc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:23:24.643611  147758 system_pods.go:61] "storage-provisioner" [df4ae621-5066-4276-9276-a0538a9f9dd1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:23:24.643620  147758 system_pods.go:74] duration metric: took 19.234558ms to wait for pod list to return data ...
	I1010 19:23:24.643637  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:23:24.651647  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:23:24.651683  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:23:24.651699  147758 node_conditions.go:105] duration metric: took 8.056629ms to run NodePressure ...
	I1010 19:23:24.651720  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:24.915651  147758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921104  147758 kubeadm.go:739] kubelet initialised
	I1010 19:23:24.921131  147758 kubeadm.go:740] duration metric: took 5.44643ms waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921142  147758 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:23:24.927535  147758 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
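The pod_ready.go lines above (and the later "Ready":"False" / "Ready":"True" status lines) keep re-checking the coredns pod until its Ready condition flips to True. Below is a small client-go sketch of that wait loop, not minikube's implementation; the kubeconfig path and the 2-second poll interval are assumptions chosen for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the pod until it is Ready or the timeout elapses.
func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval assumed for this sketch
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// Hypothetical kubeconfig path; substitute the profile's actual kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(cs, "kube-system", "coredns-7c65d6cfc9-fgtkg", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
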
	I1010 19:23:23.837827  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:23.838271  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:23.838295  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:23.838220  149057 retry.go:31] will retry after 2.562944835s: waiting for machine to come up
	I1010 19:23:26.402699  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:26.403250  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:26.403294  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:26.403192  149057 retry.go:31] will retry after 3.867656846s: waiting for machine to come up
	I1010 19:23:26.932764  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:28.936055  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.434959  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.893914  148525 start.go:364] duration metric: took 2m17.836396131s to acquireMachinesLock for "default-k8s-diff-port-361847"
	I1010 19:23:31.893993  148525 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:31.894007  148525 fix.go:54] fixHost starting: 
	I1010 19:23:31.894438  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:31.894502  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:31.914583  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I1010 19:23:31.915054  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:31.915535  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:23:31.915560  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:31.915967  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:31.916207  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:31.916387  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:23:31.918035  148525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-361847: state=Stopped err=<nil>
	I1010 19:23:31.918073  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	W1010 19:23:31.918241  148525 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:31.920390  148525 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-361847" ...
	I1010 19:23:30.275222  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.275762  148123 main.go:141] libmachine: (old-k8s-version-947203) Found IP for machine: 192.168.61.112
	I1010 19:23:30.275779  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserving static IP address...
	I1010 19:23:30.275794  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has current primary IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.276478  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.276529  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserved static IP address: 192.168.61.112
	I1010 19:23:30.276553  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | skip adding static IP to network mk-old-k8s-version-947203 - found existing host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"}
	I1010 19:23:30.276574  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:23:30.276590  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting for SSH to be available...
	I1010 19:23:30.278486  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278854  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.278890  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278984  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:23:30.279016  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:23:30.279051  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:30.279068  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:23:30.279080  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:23:30.405053  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:30.405492  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:23:30.406224  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.408830  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409178  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.409204  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409462  148123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:23:30.409673  148123 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:30.409693  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:30.409916  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.412131  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412400  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.412427  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.412770  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.412965  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.413117  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.413368  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.413653  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.413668  148123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:30.525149  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:30.525176  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525417  148123 buildroot.go:166] provisioning hostname "old-k8s-version-947203"
	I1010 19:23:30.525478  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525703  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.528343  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528681  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.528708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528841  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.529035  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529185  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529303  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.529460  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.529635  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.529645  148123 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947203 && echo "old-k8s-version-947203" | sudo tee /etc/hostname
	I1010 19:23:30.655605  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947203
	
	I1010 19:23:30.655644  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.658439  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.658834  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.658869  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.659063  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.659285  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659458  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.659733  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.659905  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.659921  148123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:30.781974  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:30.782011  148123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:30.782033  148123 buildroot.go:174] setting up certificates
	I1010 19:23:30.782048  148123 provision.go:84] configureAuth start
	I1010 19:23:30.782059  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.782379  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.785189  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785584  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.785613  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785782  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.788086  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788432  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.788458  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788582  148123 provision.go:143] copyHostCerts
	I1010 19:23:30.788645  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:30.788660  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:30.788729  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:30.788839  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:30.788867  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:30.788900  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:30.788977  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:30.788986  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:30.789013  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:30.789081  148123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947203 san=[127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]
	I1010 19:23:31.240626  148123 provision.go:177] copyRemoteCerts
	I1010 19:23:31.240698  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:31.240737  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.243678  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244108  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.244138  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244356  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.244565  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.244765  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.244929  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.331337  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:31.356266  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1010 19:23:31.381186  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:31.405301  148123 provision.go:87] duration metric: took 623.235619ms to configureAuth
	I1010 19:23:31.405337  148123 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:31.405541  148123 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:23:31.405620  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.408111  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408444  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.408470  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408695  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.408947  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409109  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.409374  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.409583  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.409609  148123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:31.647023  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:31.647050  148123 machine.go:96] duration metric: took 1.237364012s to provisionDockerMachine
	I1010 19:23:31.647063  148123 start.go:293] postStartSetup for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:23:31.647073  148123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:31.647104  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.647464  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:31.647502  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.649991  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650288  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.650311  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650484  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.650637  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.650832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.650968  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.735985  148123 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:31.740689  148123 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:31.740725  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:31.740803  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:31.740947  148123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:31.741057  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:31.751383  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:31.777091  148123 start.go:296] duration metric: took 130.011618ms for postStartSetup
	I1010 19:23:31.777134  148123 fix.go:56] duration metric: took 20.455078936s for fixHost
	I1010 19:23:31.777158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.780083  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780661  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.780708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.781069  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781245  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781382  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.781534  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.781711  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.781721  148123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:31.893727  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588211.849777581
	
	I1010 19:23:31.893756  148123 fix.go:216] guest clock: 1728588211.849777581
	I1010 19:23:31.893763  148123 fix.go:229] Guest: 2024-10-10 19:23:31.849777581 +0000 UTC Remote: 2024-10-10 19:23:31.777138808 +0000 UTC m=+218.253674512 (delta=72.638773ms)
	I1010 19:23:31.893806  148123 fix.go:200] guest clock delta is within tolerance: 72.638773ms
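The fix.go lines above read the guest clock with `date +%s.%N`, compare it to the host clock, and accept the restarted machine only if the skew stays within tolerance (72.638773ms here). A short Go sketch of that comparison follows; the tolerance value and the standalone parsing helper are assumptions for illustration, not minikube's code.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1728588211.849777581") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Now().Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
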
	I1010 19:23:31.893813  148123 start.go:83] releasing machines lock for "old-k8s-version-947203", held for 20.571797307s
	I1010 19:23:31.893848  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.894151  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:31.896747  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897156  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.897207  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897385  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898036  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898375  148123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:31.898422  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.898461  148123 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:31.898487  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.901274  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901450  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901608  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901649  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901784  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.901920  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901945  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901952  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902112  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.902130  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902306  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902348  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.902412  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902533  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:32.016264  148123 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:32.022618  148123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:32.169502  148123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:32.175960  148123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:32.176039  148123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:32.193584  148123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:32.193615  148123 start.go:495] detecting cgroup driver to use...
	I1010 19:23:32.193698  148123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:32.210923  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:32.227172  148123 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:32.227254  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:32.242142  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:32.256896  148123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:32.387462  148123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:32.575184  148123 docker.go:233] disabling docker service ...
	I1010 19:23:32.575257  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:32.590667  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:32.604825  148123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:32.739827  148123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:32.863648  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:32.879435  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:32.899582  148123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:23:32.899659  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.915010  148123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:32.915081  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.931181  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.944038  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.955182  148123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:32.972220  148123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:32.982681  148123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:32.982739  148123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:32.997012  148123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:33.009468  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:33.180403  148123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:33.284934  148123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:33.285008  148123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:33.290063  148123 start.go:563] Will wait 60s for crictl version
	I1010 19:23:33.290124  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:33.294752  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:33.342710  148123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:33.342792  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.372821  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.409395  148123 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:23:33.411012  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:33.414374  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.414789  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:33.414836  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.415084  148123 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:33.420164  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:33.433442  148123 kubeadm.go:883] updating cluster {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:33.433631  148123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:23:33.433717  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:33.480013  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:33.480078  148123 ssh_runner.go:195] Run: which lz4
	I1010 19:23:33.485443  148123 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:33.490986  148123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:33.491032  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:23:31.921836  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Start
	I1010 19:23:31.922036  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring networks are active...
	I1010 19:23:31.922890  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network default is active
	I1010 19:23:31.923271  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network mk-default-k8s-diff-port-361847 is active
	I1010 19:23:31.923685  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Getting domain xml...
	I1010 19:23:31.924449  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Creating domain...
	I1010 19:23:33.241164  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting to get IP...
	I1010 19:23:33.242273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242713  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242814  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.242702  149213 retry.go:31] will retry after 195.013046ms: waiting for machine to come up
	I1010 19:23:33.438965  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439452  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.439379  149213 retry.go:31] will retry after 344.223823ms: waiting for machine to come up
	I1010 19:23:33.785167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785833  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785864  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.785780  149213 retry.go:31] will retry after 342.787658ms: waiting for machine to come up
	I1010 19:23:33.435066  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:34.936768  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:34.936800  147758 pod_ready.go:82] duration metric: took 10.009235225s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:34.936814  147758 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944395  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.944430  147758 pod_ready.go:82] duration metric: took 1.007599746s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944445  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953224  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.953255  147758 pod_ready.go:82] duration metric: took 8.801702ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953266  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.227279  148123 crio.go:462] duration metric: took 1.74186828s to copy over tarball
	I1010 19:23:35.227383  148123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:38.216499  148123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.989082447s)
	I1010 19:23:38.216533  148123 crio.go:469] duration metric: took 2.989218587s to extract the tarball
	I1010 19:23:38.216541  148123 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:38.259699  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:38.308284  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:38.308313  148123 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:23:38.308422  148123 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:23:38.308477  148123 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.308482  148123 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.308593  148123 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.308617  148123 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.308405  148123 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.308446  148123 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.308530  148123 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310558  148123 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.310612  148123 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.310642  148123 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.310652  148123 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310645  148123 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.310560  148123 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.310733  148123 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.311068  148123 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:23:38.469174  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.469563  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.488463  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.490815  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.491841  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.496079  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.501292  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:23:34.130443  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130998  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.130915  149213 retry.go:31] will retry after 393.100812ms: waiting for machine to come up
	I1010 19:23:34.525570  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526032  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526060  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.525980  149213 retry.go:31] will retry after 465.468437ms: waiting for machine to come up
	I1010 19:23:34.992775  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993348  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993386  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.993287  149213 retry.go:31] will retry after 907.884473ms: waiting for machine to come up
	I1010 19:23:35.902481  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902942  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:35.902878  149213 retry.go:31] will retry after 1.157806188s: waiting for machine to come up
	I1010 19:23:37.062068  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062777  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:37.062706  149213 retry.go:31] will retry after 1.432559208s: waiting for machine to come up
	I1010 19:23:38.496653  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497153  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:38.497066  149213 retry.go:31] will retry after 1.559787003s: waiting for machine to come up
	I1010 19:23:37.961068  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.065559  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.528757  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.528786  147758 pod_ready.go:82] duration metric: took 4.575513259s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.528802  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538002  147758 pod_ready.go:93] pod "kube-proxy-f5l6x" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.538034  147758 pod_ready.go:82] duration metric: took 9.22357ms for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538049  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543594  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.543615  147758 pod_ready.go:82] duration metric: took 5.558665ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543626  147758 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:38.581315  148123 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:23:38.581361  148123 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:23:38.581385  148123 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.581407  148123 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.581442  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.581457  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.642668  148123 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:23:38.642721  148123 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.642777  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658235  148123 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:23:38.658291  148123 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.658288  148123 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:23:38.658328  148123 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.658346  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658373  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.678038  148123 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:23:38.678097  148123 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.678155  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682719  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.682773  148123 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:23:38.682721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.682791  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.682812  148123 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:23:38.682845  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.682872  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.691181  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842626  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:38.842714  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.842754  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.842797  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.842721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.842851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842789  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.989994  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.005815  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:39.005923  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:39.010029  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:39.010105  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:39.010150  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:39.010230  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:39.134646  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:39.141431  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.176750  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:23:39.176945  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:23:39.176989  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:23:39.177038  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:23:39.177073  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:23:39.177104  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:23:39.335056  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:23:39.335113  148123 cache_images.go:92] duration metric: took 1.026784312s to LoadCachedImages
	W1010 19:23:39.335180  148123 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1010 19:23:39.335192  148123 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.20.0 crio true true} ...
	I1010 19:23:39.335305  148123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-947203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:39.335380  148123 ssh_runner.go:195] Run: crio config
	I1010 19:23:39.394307  148123 cni.go:84] Creating CNI manager for ""
	I1010 19:23:39.394338  148123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:39.394350  148123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:39.394378  148123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947203 NodeName:old-k8s-version-947203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:23:39.394572  148123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-947203"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:39.394662  148123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:23:39.405510  148123 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:39.405600  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:39.415968  148123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1010 19:23:39.443234  148123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:39.462448  148123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:23:39.481959  148123 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:39.486188  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:39.501922  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:39.642769  148123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:39.661335  148123 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203 for IP: 192.168.61.112
	I1010 19:23:39.661363  148123 certs.go:194] generating shared ca certs ...
	I1010 19:23:39.661384  148123 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:39.661595  148123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:39.661658  148123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:39.661671  148123 certs.go:256] generating profile certs ...
	I1010 19:23:39.661779  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key
	I1010 19:23:39.661867  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52
	I1010 19:23:39.661922  148123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key
	I1010 19:23:39.662312  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:39.662395  148123 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:39.662410  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:39.662447  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:39.662501  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:39.662531  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:39.662622  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:39.663372  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:39.710366  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:39.752724  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:39.801408  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:39.843557  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 19:23:39.892522  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:39.938519  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:39.966140  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:39.992974  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:40.019381  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:40.045769  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:40.080683  148123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:40.099747  148123 ssh_runner.go:195] Run: openssl version
	I1010 19:23:40.107865  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:40.123158  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128594  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128660  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.135319  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:40.147514  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:40.162463  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168387  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168512  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.176304  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:40.191306  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:40.204196  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211044  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211122  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.219574  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:40.232634  148123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:40.237639  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:40.244141  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:40.251188  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:40.258538  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:40.265029  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:40.271754  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:23:40.278352  148123 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:40.278453  148123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:40.278518  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.317771  148123 cri.go:89] found id: ""
	I1010 19:23:40.317838  148123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:40.330494  148123 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:40.330520  148123 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:40.330576  148123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:40.341984  148123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:40.343386  148123 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:23:40.344334  148123 kubeconfig.go:62] /home/jenkins/minikube-integration/19787-81676/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-947203" cluster setting kubeconfig missing "old-k8s-version-947203" context setting]
	I1010 19:23:40.345360  148123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:40.347128  148123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:40.357902  148123 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I1010 19:23:40.357944  148123 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:40.357961  148123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:40.358031  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.399970  148123 cri.go:89] found id: ""
	I1010 19:23:40.400057  148123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:40.418527  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:40.429182  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:40.429217  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:40.429262  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:40.439287  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:40.439363  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:40.449479  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:40.459288  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:40.459359  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:40.472733  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.484890  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:40.484958  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.495301  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:40.504750  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:40.504820  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:40.515034  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:40.526168  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:40.675022  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.566371  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.815296  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.930082  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:42.027597  148123 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:42.027713  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:42.528219  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.527735  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:40.058247  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058783  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058835  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:40.058696  149213 retry.go:31] will retry after 2.214094081s: waiting for machine to come up
	I1010 19:23:42.274629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275194  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:42.275106  149213 retry.go:31] will retry after 2.126528577s: waiting for machine to come up
	I1010 19:23:42.550865  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:45.051043  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:44.028463  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.527916  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.027911  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.528797  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.028772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.527982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.027799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.527894  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.028352  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.527935  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.403101  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403575  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403616  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:44.403534  149213 retry.go:31] will retry after 3.603964622s: waiting for machine to come up
	I1010 19:23:48.008726  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009142  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009191  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:48.009100  149213 retry.go:31] will retry after 3.639744981s: waiting for machine to come up
	I1010 19:23:47.551003  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:49.661572  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:52.858209  147213 start.go:364] duration metric: took 56.558774237s to acquireMachinesLock for "no-preload-320324"
	I1010 19:23:52.858274  147213 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:52.858283  147213 fix.go:54] fixHost starting: 
	I1010 19:23:52.858705  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:52.858742  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:52.878428  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I1010 19:23:52.878955  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:52.879563  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:23:52.879599  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:52.879945  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:52.880144  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:23:52.880282  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:23:52.881626  147213 fix.go:112] recreateIfNeeded on no-preload-320324: state=Stopped err=<nil>
	I1010 19:23:52.881650  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	W1010 19:23:52.881799  147213 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:52.883912  147213 out.go:177] * Restarting existing kvm2 VM for "no-preload-320324" ...
	I1010 19:23:49.028421  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:49.528271  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.028462  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.527867  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.028782  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.528581  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.028732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.027852  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.528647  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.885239  147213 main.go:141] libmachine: (no-preload-320324) Calling .Start
	I1010 19:23:52.885429  147213 main.go:141] libmachine: (no-preload-320324) Ensuring networks are active...
	I1010 19:23:52.886211  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network default is active
	I1010 19:23:52.886749  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network mk-no-preload-320324 is active
	I1010 19:23:52.887310  147213 main.go:141] libmachine: (no-preload-320324) Getting domain xml...
	I1010 19:23:52.888034  147213 main.go:141] libmachine: (no-preload-320324) Creating domain...
	I1010 19:23:51.652975  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653464  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Found IP for machine: 192.168.50.32
	I1010 19:23:51.653487  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserving static IP address...
	I1010 19:23:51.653509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has current primary IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653910  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.653956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | skip adding static IP to network mk-default-k8s-diff-port-361847 - found existing host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"}
	I1010 19:23:51.653974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserved static IP address: 192.168.50.32
	I1010 19:23:51.653993  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for SSH to be available...
	I1010 19:23:51.654006  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Getting to WaitForSSH function...
	I1010 19:23:51.655927  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656210  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.656240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656334  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH client type: external
	I1010 19:23:51.656372  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa (-rw-------)
	I1010 19:23:51.656409  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:51.656425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | About to run SSH command:
	I1010 19:23:51.656436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | exit 0
	I1010 19:23:51.780839  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:51.781206  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetConfigRaw
	I1010 19:23:51.781939  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:51.784347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784663  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.784696  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784918  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:23:51.785134  148525 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:51.785158  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:51.785403  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.787817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788306  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.788347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788547  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.788807  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789038  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789274  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.789515  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.789802  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.789825  148525 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:51.893367  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:51.893399  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893652  148525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-361847"
	I1010 19:23:51.893699  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.896986  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897377  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.897422  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897662  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.897815  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.897949  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.898064  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.898302  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.898489  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.898502  148525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-361847 && echo "default-k8s-diff-port-361847" | sudo tee /etc/hostname
	I1010 19:23:52.015158  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361847
	
	I1010 19:23:52.015199  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.018094  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018468  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.018497  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018683  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.018901  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019039  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.019474  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.019690  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.019708  148525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-361847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-361847/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-361847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:52.133923  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:52.133960  148525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:52.134007  148525 buildroot.go:174] setting up certificates
	I1010 19:23:52.134023  148525 provision.go:84] configureAuth start
	I1010 19:23:52.134043  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:52.134351  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.137242  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137637  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.137670  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137860  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.140264  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.140672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140833  148525 provision.go:143] copyHostCerts
	I1010 19:23:52.140907  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:52.140922  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:52.140977  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:52.141088  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:52.141098  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:52.141118  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:52.141175  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:52.141182  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:52.141213  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:52.141264  148525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-361847 san=[127.0.0.1 192.168.50.32 default-k8s-diff-port-361847 localhost minikube]
	I1010 19:23:52.241146  148525 provision.go:177] copyRemoteCerts
	I1010 19:23:52.241212  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:52.241241  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.244061  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244463  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.244490  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244731  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.244929  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.245110  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.245228  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.327309  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:52.352288  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 19:23:52.376308  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:52.400807  148525 provision.go:87] duration metric: took 266.765119ms to configureAuth
	I1010 19:23:52.400862  148525 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:52.401065  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:52.401171  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.403552  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.403919  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.403950  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.404173  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.404371  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404513  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.404743  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.404927  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.404949  148525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:52.622902  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:52.622930  148525 machine.go:96] duration metric: took 837.779579ms to provisionDockerMachine
	I1010 19:23:52.622942  148525 start.go:293] postStartSetup for "default-k8s-diff-port-361847" (driver="kvm2")
	I1010 19:23:52.622952  148525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:52.622968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.623331  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:52.623369  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.626106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626435  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.626479  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626721  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.626932  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.627091  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.627262  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.708050  148525 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:52.712524  148525 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:52.712550  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:52.712608  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:52.712688  148525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:52.712782  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:52.723719  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:52.747686  148525 start.go:296] duration metric: took 124.729371ms for postStartSetup
	I1010 19:23:52.747727  148525 fix.go:56] duration metric: took 20.853721623s for fixHost
	I1010 19:23:52.747749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.750316  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750645  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.750677  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.751046  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751195  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751333  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.751511  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.751733  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.751749  148525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:52.857986  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588232.831281012
	
	I1010 19:23:52.858019  148525 fix.go:216] guest clock: 1728588232.831281012
	I1010 19:23:52.858029  148525 fix.go:229] Guest: 2024-10-10 19:23:52.831281012 +0000 UTC Remote: 2024-10-10 19:23:52.747731551 +0000 UTC m=+158.845659062 (delta=83.549461ms)
	I1010 19:23:52.858075  148525 fix.go:200] guest clock delta is within tolerance: 83.549461ms
	I1010 19:23:52.858088  148525 start.go:83] releasing machines lock for "default-k8s-diff-port-361847", held for 20.964121636s
	I1010 19:23:52.858120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.858491  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.861220  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.861672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861828  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862337  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862548  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862655  148525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:52.862702  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.862825  148525 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:52.862854  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.865579  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.865960  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866290  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866300  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.866319  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866423  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866496  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866648  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866671  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.866798  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866910  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.966354  148525 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:52.972526  148525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:53.119801  148525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:53.126287  148525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:53.126355  148525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:53.147301  148525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:53.147325  148525 start.go:495] detecting cgroup driver to use...
	I1010 19:23:53.147381  148525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:53.167368  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:53.183239  148525 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:53.183308  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:53.203230  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:53.217261  148525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:53.343555  148525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:53.491952  148525 docker.go:233] disabling docker service ...
	I1010 19:23:53.492054  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:53.508136  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:53.521662  148525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:53.651858  148525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:53.781954  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:53.803934  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:53.826070  148525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:53.826146  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.837506  148525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:53.837587  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.848653  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.860511  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.873254  148525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:53.887862  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.899507  148525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.923325  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.934999  148525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:53.946869  148525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:53.946945  148525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:53.968116  148525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:53.980109  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:54.106345  148525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:54.210345  148525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:54.210417  148525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:54.215968  148525 start.go:563] Will wait 60s for crictl version
	I1010 19:23:54.216037  148525 ssh_runner.go:195] Run: which crictl
	I1010 19:23:54.219885  148525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:54.260286  148525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:54.260375  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.289908  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.320940  148525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:52.050137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.060194  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:56.551981  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.027982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.528065  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.027890  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.527753  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.027877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.028263  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.028743  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.528039  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.234149  147213 main.go:141] libmachine: (no-preload-320324) Waiting to get IP...
	I1010 19:23:54.235147  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.235598  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.235657  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.235580  149378 retry.go:31] will retry after 308.921504ms: waiting for machine to come up
	I1010 19:23:54.546327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.547002  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.547029  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.546956  149378 retry.go:31] will retry after 288.92327ms: waiting for machine to come up
	I1010 19:23:54.837625  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.838136  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.838164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.838054  149378 retry.go:31] will retry after 321.948113ms: waiting for machine to come up
	I1010 19:23:55.161940  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.162494  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.162526  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.162441  149378 retry.go:31] will retry after 573.848095ms: waiting for machine to come up
	I1010 19:23:55.739080  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.739592  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.739620  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.739494  149378 retry.go:31] will retry after 529.087622ms: waiting for machine to come up
	I1010 19:23:56.270324  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.270899  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.270929  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.270850  149378 retry.go:31] will retry after 629.204989ms: waiting for machine to come up
	I1010 19:23:56.901836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.902283  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.902325  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.902222  149378 retry.go:31] will retry after 804.309499ms: waiting for machine to come up
	I1010 19:23:57.708806  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:57.709175  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:57.709208  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:57.709151  149378 retry.go:31] will retry after 1.204078295s: waiting for machine to come up
	I1010 19:23:54.322534  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:54.325744  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326217  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:54.326257  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326533  148525 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:54.331527  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:54.343881  148525 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:54.344033  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:54.344084  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:54.389066  148525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:54.389149  148525 ssh_runner.go:195] Run: which lz4
	I1010 19:23:54.393550  148525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:54.397787  148525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:54.397833  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:55.897111  148525 crio.go:462] duration metric: took 1.503593301s to copy over tarball
	I1010 19:23:55.897212  148525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:58.060691  148525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16343467s)
	I1010 19:23:58.060731  148525 crio.go:469] duration metric: took 2.163580526s to extract the tarball
	I1010 19:23:58.060741  148525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:58.103877  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:58.162881  148525 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:58.162907  148525 cache_images.go:84] Images are preloaded, skipping loading
	I1010 19:23:58.162915  148525 kubeadm.go:934] updating node { 192.168.50.32 8444 v1.31.1 crio true true} ...
	I1010 19:23:58.163031  148525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-361847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:58.163098  148525 ssh_runner.go:195] Run: crio config
	I1010 19:23:58.219804  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:23:58.219827  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:58.219837  148525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:58.219861  148525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-361847 NodeName:default-k8s-diff-port-361847 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:58.219982  148525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-361847"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:58.220042  148525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:58.231444  148525 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:58.231565  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:58.241835  148525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1010 19:23:58.259408  148525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:58.276571  148525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I1010 19:23:58.294640  148525 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:58.298503  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:58.312286  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:58.449757  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:58.467342  148525 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847 for IP: 192.168.50.32
	I1010 19:23:58.467377  148525 certs.go:194] generating shared ca certs ...
	I1010 19:23:58.467398  148525 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:58.467583  148525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:58.467642  148525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:58.467655  148525 certs.go:256] generating profile certs ...
	I1010 19:23:58.467826  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/client.key
	I1010 19:23:58.467895  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key.ae5e3f04
	I1010 19:23:58.467951  148525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key
	I1010 19:23:58.468089  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:58.468136  148525 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:58.468153  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:58.468194  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:58.468226  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:58.468260  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:58.468317  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:58.468931  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:58.529632  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:58.571900  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:58.612599  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:58.645536  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 19:23:58.675961  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:23:58.700712  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:58.725355  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:58.751138  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:58.775832  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:58.800729  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:58.825558  148525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:58.843331  148525 ssh_runner.go:195] Run: openssl version
	I1010 19:23:58.849271  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:58.861031  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865721  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865797  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.871961  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:58.884520  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:58.896744  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901507  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901571  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.907366  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:58.919784  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:58.931972  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936897  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936981  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.943007  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:59.052037  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:01.551982  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:59.028519  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:59.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.528623  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.028697  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.528030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.028062  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.527876  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.028745  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.528569  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.914409  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:58.914894  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:58.914927  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:58.914831  149378 retry.go:31] will retry after 1.631827888s: waiting for machine to come up
	I1010 19:24:00.548505  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:00.549135  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:00.549164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:00.549043  149378 retry.go:31] will retry after 2.126895157s: waiting for machine to come up
	I1010 19:24:02.678328  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:02.678907  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:02.678969  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:02.678891  149378 retry.go:31] will retry after 2.754376625s: waiting for machine to come up
	I1010 19:23:58.955104  148525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:58.959833  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:58.966528  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:58.973590  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:58.982390  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:58.990767  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:58.997162  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:23:59.003647  148525 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:59.003786  148525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:59.003865  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.048772  148525 cri.go:89] found id: ""
	I1010 19:23:59.048869  148525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:59.061267  148525 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:59.061288  148525 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:59.061338  148525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:59.072629  148525 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:59.074287  148525 kubeconfig.go:125] found "default-k8s-diff-port-361847" server: "https://192.168.50.32:8444"
	I1010 19:23:59.077880  148525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:59.090738  148525 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I1010 19:23:59.090783  148525 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:59.090799  148525 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:59.090886  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.136762  148525 cri.go:89] found id: ""
	I1010 19:23:59.136888  148525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:59.155937  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:59.166471  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:59.166493  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:59.166549  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:23:59.178247  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:59.178313  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:59.189455  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:23:59.200127  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:59.200204  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:59.210764  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.221048  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:59.221119  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.231762  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:23:59.242152  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:59.242217  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:59.252608  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:59.265219  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:59.391743  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.243288  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.453782  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.532137  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.623598  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:00.623711  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.124678  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.624626  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.667587  148525 api_server.go:72] duration metric: took 1.043987857s to wait for apiserver process to appear ...
	I1010 19:24:01.667621  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:01.667649  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:01.668298  148525 api_server.go:269] stopped: https://192.168.50.32:8444/healthz: Get "https://192.168.50.32:8444/healthz": dial tcp 192.168.50.32:8444: connect: connection refused
	I1010 19:24:02.168273  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.275654  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.275695  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.275713  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.309713  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.309770  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.668325  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.684992  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:05.685031  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.168198  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.176584  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:06.176627  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.668130  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.682049  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:24:06.692780  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:06.692811  148525 api_server.go:131] duration metric: took 5.025182717s to wait for apiserver health ...
	I1010 19:24:06.692820  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:24:06.692831  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:06.694447  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
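The api_server.go lines above (19:24:01 through 19:24:06) show minikube polling the apiserver's /healthz endpoint and tolerating connection-refused, 403 and 500 responses until the post-start hooks finish and the endpoint returns 200. A minimal Go sketch of that wait pattern follows; the URL, poll interval and timeout are illustrative assumptions, not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// Non-200 responses (403 before RBAC bootstrap roles exist, 500 while
// post-start hooks are still running) and connection errors mean "retry".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The anonymous probe does not trust the apiserver's serving cert,
		// so verification is skipped, matching the checks in the log above.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not report healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.32:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}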
	I1010 19:24:03.558797  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:06.054012  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:04.028711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:04.528770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.028716  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.528083  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.028204  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.528430  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.027972  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.027829  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.528676  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.435450  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:05.435940  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:05.435970  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:05.435888  149378 retry.go:31] will retry after 2.981990051s: waiting for machine to come up
	I1010 19:24:08.419385  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:08.419982  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:08.420006  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:08.419905  149378 retry.go:31] will retry after 3.976204267s: waiting for machine to come up
	I1010 19:24:06.695841  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:06.711212  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:06.747753  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:06.768344  148525 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:06.768429  148525 system_pods.go:61] "coredns-7c65d6cfc9-rv8vq" [93b209ea-bb5f-40c5-aea8-8771b785f021] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:06.768446  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [65129999-984d-497c-a6e1-9c53a5374991] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:06.768452  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [5f18ba24-29cf-433e-a70d-23757278c04f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:06.768460  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [c189c785-8ac5-4003-802d-9e7c089d450e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:06.768467  148525 system_pods.go:61] "kube-proxy-v5lm8" [e78eabf9-5c65-4cba-83fd-0837cef05126] Running
	I1010 19:24:06.768476  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [4f84f0f5-e255-4534-9db3-e5cfee0b2447] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:06.768485  148525 system_pods.go:61] "metrics-server-6867b74b74-h5kjm" [a3979b79-bd21-490b-97ac-0a78efd43a99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:06.768493  148525 system_pods.go:61] "storage-provisioner" [ca8606d3-9adb-46da-886a-3081b11b52a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:24:06.768499  148525 system_pods.go:74] duration metric: took 20.716461ms to wait for pod list to return data ...
	I1010 19:24:06.768509  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:06.777935  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:06.777973  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:06.777988  148525 node_conditions.go:105] duration metric: took 9.473726ms to run NodePressure ...
	I1010 19:24:06.778019  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:07.053296  148525 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057585  148525 kubeadm.go:739] kubelet initialised
	I1010 19:24:07.057608  148525 kubeadm.go:740] duration metric: took 4.283027ms waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057618  148525 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:07.064157  148525 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.069962  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.069989  148525 pod_ready.go:82] duration metric: took 5.791958ms for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.069999  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.070022  148525 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.075615  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075644  148525 pod_ready.go:82] duration metric: took 5.608749ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.075654  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075661  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.081717  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081743  148525 pod_ready.go:82] duration metric: took 6.074977ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.081754  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081761  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.152204  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152244  148525 pod_ready.go:82] duration metric: took 70.475599ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.152258  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152266  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551283  148525 pod_ready.go:93] pod "kube-proxy-v5lm8" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:07.551311  148525 pod_ready.go:82] duration metric: took 399.036581ms for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551324  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
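The pod_ready.go lines above wait for each system-critical pod to report the PodReady condition, skipping pods whose node is still NotReady. A rough client-go sketch of that per-pod check follows; the kubeconfig path, poll interval and function names are assumptions for illustration, not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls a pod until its PodReady condition is True.
func waitForPodReady(clientset *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
}

func main() {
	// kubeconfig path is an assumption for the example.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitForPodReady(clientset, "kube-system", "kube-scheduler-default-k8s-diff-port-361847", 4*time.Minute))
}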
	I1010 19:24:08.550896  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:10.551437  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:09.028538  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:09.527883  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.028693  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.528439  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.028528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.528679  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.028002  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.527904  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.028685  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.527833  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.401115  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401808  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has current primary IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401841  147213 main.go:141] libmachine: (no-preload-320324) Found IP for machine: 192.168.72.11
	I1010 19:24:12.401856  147213 main.go:141] libmachine: (no-preload-320324) Reserving static IP address...
	I1010 19:24:12.402368  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.402407  147213 main.go:141] libmachine: (no-preload-320324) DBG | skip adding static IP to network mk-no-preload-320324 - found existing host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"}
	I1010 19:24:12.402426  147213 main.go:141] libmachine: (no-preload-320324) Reserved static IP address: 192.168.72.11
	I1010 19:24:12.402443  147213 main.go:141] libmachine: (no-preload-320324) Waiting for SSH to be available...
	I1010 19:24:12.402458  147213 main.go:141] libmachine: (no-preload-320324) DBG | Getting to WaitForSSH function...
	I1010 19:24:12.404803  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405200  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.405226  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405461  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH client type: external
	I1010 19:24:12.405494  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa (-rw-------)
	I1010 19:24:12.405527  147213 main.go:141] libmachine: (no-preload-320324) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:24:12.405541  147213 main.go:141] libmachine: (no-preload-320324) DBG | About to run SSH command:
	I1010 19:24:12.405554  147213 main.go:141] libmachine: (no-preload-320324) DBG | exit 0
	I1010 19:24:12.529010  147213 main.go:141] libmachine: (no-preload-320324) DBG | SSH cmd err, output: <nil>: 
	I1010 19:24:12.529401  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetConfigRaw
	I1010 19:24:12.530257  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.533285  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533692  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.533727  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533963  147213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/config.json ...
	I1010 19:24:12.534205  147213 machine.go:93] provisionDockerMachine start ...
	I1010 19:24:12.534230  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:12.534450  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.536585  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.536976  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.537003  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.537133  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.537323  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537512  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537689  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.537925  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.538138  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.538151  147213 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:24:12.641679  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:24:12.641706  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.641964  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:24:12.642002  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.642235  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.645149  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645488  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.645521  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.645836  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646001  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646155  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.646352  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.646533  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.646545  147213 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320324 && echo "no-preload-320324" | sudo tee /etc/hostname
	I1010 19:24:12.766449  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320324
	
	I1010 19:24:12.766480  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.769836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770331  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.770356  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770584  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.770810  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.770962  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.771119  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.771252  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.771448  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.771470  147213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320324/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:24:12.882458  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:24:12.882495  147213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:24:12.882537  147213 buildroot.go:174] setting up certificates
	I1010 19:24:12.882547  147213 provision.go:84] configureAuth start
	I1010 19:24:12.882562  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.882865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.885854  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886139  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.886173  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886308  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.888479  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.888819  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888976  147213 provision.go:143] copyHostCerts
	I1010 19:24:12.889037  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:24:12.889049  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:24:12.889102  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:24:12.889235  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:24:12.889246  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:24:12.889278  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:24:12.889370  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:24:12.889381  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:24:12.889406  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:24:12.889493  147213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.no-preload-320324 san=[127.0.0.1 192.168.72.11 localhost minikube no-preload-320324]
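provision.go:117 above generates a server certificate whose SANs cover the loopback address, the machine IP and the hostnames listed in the log, signed by the profile's CA. A compact Go sketch of producing a certificate with those SANs is below; it is self-signed for brevity (minikube signs server.pem with its CA key), and the subject and SAN values are copied from the log line above purely as an example.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-320324"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-320324"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.11")},
	}
	// Self-signed here; a real setup would pass the CA cert and key as parent/priv.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}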
	I1010 19:24:12.978176  147213 provision.go:177] copyRemoteCerts
	I1010 19:24:12.978235  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:24:12.978261  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.981662  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982182  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.982218  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.982647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.982829  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.983005  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.067269  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:24:13.092777  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 19:24:13.118530  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:24:13.143401  147213 provision.go:87] duration metric: took 260.833877ms to configureAuth
	I1010 19:24:13.143436  147213 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:24:13.143678  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:13.143776  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.147086  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147507  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.147531  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147787  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.148032  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148222  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.148660  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.149013  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.149041  147213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:24:13.375683  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:24:13.375714  147213 machine.go:96] duration metric: took 841.493636ms to provisionDockerMachine
	I1010 19:24:13.375736  147213 start.go:293] postStartSetup for "no-preload-320324" (driver="kvm2")
	I1010 19:24:13.375754  147213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:24:13.375775  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.376085  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:24:13.376116  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.378855  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379179  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.379224  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379408  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.379608  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.379769  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.379910  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.459580  147213 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:24:13.463644  147213 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:24:13.463674  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:24:13.463751  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:24:13.463845  147213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:24:13.463963  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:24:13.473483  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:13.498773  147213 start.go:296] duration metric: took 123.021762ms for postStartSetup
	I1010 19:24:13.498814  147213 fix.go:56] duration metric: took 20.640532088s for fixHost
	I1010 19:24:13.498834  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.501681  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502243  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.502281  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502476  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.502679  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502835  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502993  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.503177  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.503383  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.503396  147213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:24:13.613929  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588253.586950075
	
	I1010 19:24:13.613954  147213 fix.go:216] guest clock: 1728588253.586950075
	I1010 19:24:13.613963  147213 fix.go:229] Guest: 2024-10-10 19:24:13.586950075 +0000 UTC Remote: 2024-10-10 19:24:13.498818059 +0000 UTC m=+359.788559229 (delta=88.132016ms)
	I1010 19:24:13.613988  147213 fix.go:200] guest clock delta is within tolerance: 88.132016ms
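fix.go above reads the guest clock over SSH with `date +%s.%N`, compares it to the host's wall clock, and accepts the machine when the skew (88ms here) is within tolerance. A small sketch of that comparison follows; the tolerance value and function name are assumptions for illustration.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDeltaWithinTolerance parses the guest's `date +%s.%N` output and
// reports whether its skew from hostTime is within tolerance. The float
// conversion loses sub-microsecond precision, which is fine for a skew check.
func clockDeltaWithinTolerance(guestOutput string, hostTime time.Time, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64) // e.g. "1728588253.586950075"
	if err != nil {
		return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(hostTime)
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	delta, ok, err := clockDeltaWithinTolerance("1728588253.586950075", time.Now(), 2*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}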
	I1010 19:24:13.614020  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 20.755775587s
	I1010 19:24:13.614063  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.614473  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:13.617327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.617694  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.617721  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.618016  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618670  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618884  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618989  147213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:24:13.619047  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.619142  147213 ssh_runner.go:195] Run: cat /version.json
	I1010 19:24:13.619185  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.621972  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622229  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622322  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622348  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622533  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622666  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622697  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622736  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.622865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622930  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623059  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.623073  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.623225  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623349  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.720999  147213 ssh_runner.go:195] Run: systemctl --version
	I1010 19:24:13.727679  147213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:24:09.562834  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:12.058686  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:13.870558  147213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:24:13.877853  147213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:24:13.877923  147213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:24:13.896295  147213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:24:13.896325  147213 start.go:495] detecting cgroup driver to use...
	I1010 19:24:13.896400  147213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:24:13.913122  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:24:13.929359  147213 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:24:13.929437  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:24:13.944840  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:24:13.960062  147213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:24:14.090774  147213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:24:14.246094  147213 docker.go:233] disabling docker service ...
	I1010 19:24:14.246161  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:24:14.264682  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:24:14.280264  147213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:24:14.437156  147213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:24:14.569220  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:24:14.585723  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:24:14.607349  147213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:24:14.607429  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.619113  147213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:24:14.619198  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.631818  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.643977  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.655753  147213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:24:14.667235  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.679225  147213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.698760  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.710440  147213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:24:14.722565  147213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:24:14.722625  147213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:24:14.740587  147213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:24:14.752630  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:14.887728  147213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:24:14.989026  147213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:24:14.989109  147213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:24:14.995309  147213 start.go:563] Will wait 60s for crictl version
	I1010 19:24:14.995366  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.999840  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:24:15.043758  147213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:24:15.043856  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.079274  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.116630  147213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:24:13.050633  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:15.552413  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:14.028090  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:14.528072  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.028648  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.528388  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.028060  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.528472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.028098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.528125  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.028563  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.528613  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.118343  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:15.121596  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122101  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:15.122133  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122396  147213 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1010 19:24:15.127140  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:15.141249  147213 kubeadm.go:883] updating cluster {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:24:15.141375  147213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:24:15.141417  147213 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:24:15.183271  147213 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:24:15.183303  147213 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:24:15.183412  147213 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.183444  147213 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.183452  147213 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.183459  147213 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1010 19:24:15.183422  147213 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.183493  147213 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.183512  147213 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.183507  147213 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.185099  147213 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.185098  147213 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.185103  147213 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.185106  147213 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.328484  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.333573  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.340047  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.358922  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1010 19:24:15.359800  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.366668  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.409942  147213 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1010 19:24:15.409995  147213 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.410050  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.416186  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.452343  147213 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1010 19:24:15.452385  147213 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.452426  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.533567  147213 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1010 19:24:15.533620  147213 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.533671  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585611  147213 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1010 19:24:15.585659  147213 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.585685  147213 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1010 19:24:15.585712  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585724  147213 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.585765  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585769  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.585805  147213 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1010 19:24:15.585832  147213 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.585856  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.585872  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585943  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.603131  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.661918  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.683739  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.683760  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.683833  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.683880  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.685385  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.792253  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.818116  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.818183  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.818289  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.818321  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.818402  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.878069  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1010 19:24:15.878202  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.940520  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.953841  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1010 19:24:15.953955  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:15.953990  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.954047  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1010 19:24:15.954115  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1010 19:24:15.954120  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1010 19:24:15.954130  147213 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954144  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:15.954157  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954205  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:16.005975  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1010 19:24:16.006028  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1010 19:24:16.006090  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:16.023905  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1010 19:24:16.023990  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1010 19:24:16.024024  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:16.024023  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1010 19:24:16.033715  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.150881  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.144766677s)
	I1010 19:24:18.150935  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1010 19:24:18.150931  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.196753845s)
	I1010 19:24:18.150944  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.126894115s)
	I1010 19:24:18.150973  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1010 19:24:18.150953  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1010 19:24:18.150982  147213 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.117235962s)
	I1010 19:24:18.151002  147213 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151014  147213 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1010 19:24:18.151053  147213 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.151069  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151097  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.059223  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:14.059252  148525 pod_ready.go:82] duration metric: took 6.507918149s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:14.059266  148525 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:16.066908  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.082398  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.051799  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:20.552644  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:19.028306  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:19.527854  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.028765  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.528245  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.027936  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.528172  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.028527  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.528446  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.028709  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.528711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.952099  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.801005716s)
	I1010 19:24:21.952134  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1010 19:24:21.952163  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952165  147213 ssh_runner.go:235] Completed: which crictl: (3.801048272s)
	I1010 19:24:21.952212  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952225  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:21.993627  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:20.566055  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:22.567145  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:23.053514  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:25.554151  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:24.028402  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:24.528516  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.028207  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.527928  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.028032  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.527826  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.028609  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.528448  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.028018  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.528597  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.929370  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.977128659s)
	I1010 19:24:23.929418  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1010 19:24:23.929450  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929498  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.935844384s)
	I1010 19:24:23.929532  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929551  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:26.009485  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079908324s)
	I1010 19:24:26.009567  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 19:24:26.009484  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079925224s)
	I1010 19:24:26.009641  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1010 19:24:26.009671  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:26.009684  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:26.009720  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:27.968483  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.958772952s)
	I1010 19:24:27.968534  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1010 19:24:27.968559  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.958813643s)
	I1010 19:24:27.968587  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1010 19:24:27.968619  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:27.968686  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:25.069787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:27.567013  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:28.050968  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:30.551528  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:29.027960  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.528054  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.028227  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.527791  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.027790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.528761  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.028343  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.528553  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.028195  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.527895  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.315157  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.346440456s)
	I1010 19:24:29.315211  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1010 19:24:29.315244  147213 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:29.315296  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:30.173931  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 19:24:30.173977  147213 cache_images.go:123] Successfully loaded all cached images
	I1010 19:24:30.173985  147213 cache_images.go:92] duration metric: took 14.990666845s to LoadCachedImages
	I1010 19:24:30.174001  147213 kubeadm.go:934] updating node { 192.168.72.11 8443 v1.31.1 crio true true} ...
	I1010 19:24:30.174129  147213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:24:30.174221  147213 ssh_runner.go:195] Run: crio config
	I1010 19:24:30.222677  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:30.222702  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:30.222711  147213 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:24:30.222736  147213 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320324 NodeName:no-preload-320324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:24:30.222923  147213 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320324"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:24:30.222998  147213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:24:30.233755  147213 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:24:30.233818  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:24:30.243829  147213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1010 19:24:30.263056  147213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:24:30.282362  147213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1010 19:24:30.300449  147213 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I1010 19:24:30.304661  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:30.317462  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:30.445515  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:30.462816  147213 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324 for IP: 192.168.72.11
	I1010 19:24:30.462847  147213 certs.go:194] generating shared ca certs ...
	I1010 19:24:30.462871  147213 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:30.463074  147213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:24:30.463132  147213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:24:30.463145  147213 certs.go:256] generating profile certs ...
	I1010 19:24:30.463289  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/client.key
	I1010 19:24:30.463364  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key.a7785fc5
	I1010 19:24:30.463413  147213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key
	I1010 19:24:30.463565  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:24:30.463604  147213 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:24:30.463617  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:24:30.463657  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:24:30.463689  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:24:30.463721  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:24:30.463774  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:30.464502  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:24:30.525320  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:24:30.565229  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:24:30.597731  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:24:30.626174  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 19:24:30.659991  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:24:30.685662  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:24:30.710757  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:24:30.736325  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:24:30.771239  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:24:30.796467  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:24:30.821925  147213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:24:30.840743  147213 ssh_runner.go:195] Run: openssl version
	I1010 19:24:30.846898  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:24:30.858410  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863188  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863260  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.869307  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:24:30.880319  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:24:30.891307  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895771  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895828  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.901510  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:24:30.912627  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:24:30.924330  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929108  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929194  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.935266  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:24:30.946714  147213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:24:30.951692  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:24:30.957910  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:24:30.964296  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:24:30.971001  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:24:30.977427  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:24:30.984201  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:24:30.990532  147213 kubeadm.go:392] StartCluster: {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:24:30.990622  147213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:24:30.990727  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.033544  147213 cri.go:89] found id: ""
	I1010 19:24:31.033624  147213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:24:31.044956  147213 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:24:31.044975  147213 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:24:31.045025  147213 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:24:31.056563  147213 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:24:31.057705  147213 kubeconfig.go:125] found "no-preload-320324" server: "https://192.168.72.11:8443"
	I1010 19:24:31.059853  147213 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:24:31.071304  147213 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.11
	I1010 19:24:31.071338  147213 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:24:31.071353  147213 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:24:31.071444  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.107345  147213 cri.go:89] found id: ""
	I1010 19:24:31.107429  147213 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:24:31.125556  147213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:24:31.135390  147213 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:24:31.135428  147213 kubeadm.go:157] found existing configuration files:
	
	I1010 19:24:31.135478  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:24:31.144653  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:24:31.144715  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:24:31.154458  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:24:31.163444  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:24:31.163501  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:24:31.172633  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.181939  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:24:31.182001  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.191638  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:24:31.200846  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:24:31.200935  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:24:31.211048  147213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:24:31.221008  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:31.352733  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.270546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.474510  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.551517  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.707737  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:32.707826  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.208647  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.708539  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.728647  147213 api_server.go:72] duration metric: took 1.020907246s to wait for apiserver process to appear ...
	I1010 19:24:33.728678  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:33.728701  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:30.066635  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.066732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.552277  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:35.051399  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:34.028434  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:34.527949  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.028224  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.528470  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.028540  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.528499  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.028362  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.528483  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.028531  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.527865  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.025756  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.025787  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.025802  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.078247  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.078283  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.229601  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.237166  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.237204  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:37.728824  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.735660  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.735700  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.229746  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.234449  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:38.234491  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.729000  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.737564  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:24:38.751982  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:38.752012  147213 api_server.go:131] duration metric: took 5.023326632s to wait for apiserver health ...
	I1010 19:24:38.752023  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:38.752030  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:38.753351  147213 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:34.067208  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:36.067413  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.566729  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.754645  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:38.772086  147213 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:38.792017  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:38.800547  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:38.800592  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:38.800602  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:38.800609  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:38.800617  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:38.800624  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:24:38.800629  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:38.800638  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:38.800642  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:24:38.800648  147213 system_pods.go:74] duration metric: took 8.60732ms to wait for pod list to return data ...
	I1010 19:24:38.800654  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:38.804628  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:38.804663  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:38.804680  147213 node_conditions.go:105] duration metric: took 4.021699ms to run NodePressure ...
	I1010 19:24:38.804700  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:39.078452  147213 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087090  147213 kubeadm.go:739] kubelet initialised
	I1010 19:24:39.087116  147213 kubeadm.go:740] duration metric: took 8.636436ms waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087125  147213 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:39.094468  147213 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.108724  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108756  147213 pod_ready.go:82] duration metric: took 14.254631ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.108770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108780  147213 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.119304  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119335  147213 pod_ready.go:82] duration metric: took 10.543376ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.119345  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119352  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.127243  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127268  147213 pod_ready.go:82] duration metric: took 7.907414ms for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.127278  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127285  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.195549  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195578  147213 pod_ready.go:82] duration metric: took 68.282333ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.195588  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195594  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.595842  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595871  147213 pod_ready.go:82] duration metric: took 400.267905ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.595880  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595886  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.995731  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995760  147213 pod_ready.go:82] duration metric: took 399.866947ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.995770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995777  147213 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:40.396420  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396456  147213 pod_ready.go:82] duration metric: took 400.667834ms for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:40.396470  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396482  147213 pod_ready.go:39] duration metric: took 1.309346973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:40.396508  147213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:24:40.409956  147213 ops.go:34] apiserver oom_adj: -16
	I1010 19:24:40.409980  147213 kubeadm.go:597] duration metric: took 9.364998977s to restartPrimaryControlPlane
	I1010 19:24:40.409991  147213 kubeadm.go:394] duration metric: took 9.419470024s to StartCluster
	I1010 19:24:40.410009  147213 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.410085  147213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:24:40.413037  147213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.413448  147213 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:24:40.413783  147213 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:24:40.413979  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:40.413996  147213 addons.go:69] Setting default-storageclass=true in profile "no-preload-320324"
	I1010 19:24:40.414020  147213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320324"
	I1010 19:24:40.413983  147213 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320324"
	I1010 19:24:40.414048  147213 addons.go:234] Setting addon storage-provisioner=true in "no-preload-320324"
	W1010 19:24:40.414057  147213 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:24:40.414091  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414170  147213 addons.go:69] Setting metrics-server=true in profile "no-preload-320324"
	I1010 19:24:40.414230  147213 addons.go:234] Setting addon metrics-server=true in "no-preload-320324"
	W1010 19:24:40.414252  147213 addons.go:243] addon metrics-server should already be in state true
	I1010 19:24:40.414292  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414612  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414640  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414678  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.414712  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.415409  147213 out.go:177] * Verifying Kubernetes components...
	I1010 19:24:40.415412  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.415553  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.416812  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:40.431363  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1010 19:24:40.431474  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I1010 19:24:40.431659  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I1010 19:24:40.431983  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432136  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432156  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432567  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432587  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432710  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432732  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432740  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432749  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.433000  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433079  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433103  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433468  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.433498  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.436984  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.453362  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.453426  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.454884  147213 addons.go:234] Setting addon default-storageclass=true in "no-preload-320324"
	W1010 19:24:40.454913  147213 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:24:40.454947  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.455335  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.455394  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.470642  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1010 19:24:40.471118  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.471701  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.471730  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.472241  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.472523  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.473953  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I1010 19:24:40.474196  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I1010 19:24:40.474332  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474672  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474814  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.474827  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475181  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.475210  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475310  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475702  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475785  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.475825  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.475922  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.476046  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.478147  147213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:40.478395  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.479869  147213 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.479896  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:24:40.479922  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.480549  147213 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:24:37.051611  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:39.551952  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:41.553895  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:40.482101  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:24:40.482119  147213 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:24:40.482144  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.484066  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484560  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.484588  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484833  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.485065  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.485241  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.485272  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485443  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.485788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.485807  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485842  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.486017  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.486202  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.486454  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.492533  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1010 19:24:40.493012  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.493566  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.493595  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.494056  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.494325  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.496053  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.496301  147213 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.496321  147213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:24:40.496344  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.499125  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499667  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.499690  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499843  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.500022  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.500194  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.500357  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.651454  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:40.667056  147213 node_ready.go:35] waiting up to 6m0s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:40.782217  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.803094  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:24:40.803122  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:24:40.812288  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.837679  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:24:40.837723  147213 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:24:40.882090  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:40.882119  147213 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:24:40.940115  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:41.949181  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.136852217s)
	I1010 19:24:41.949258  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949275  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949286  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.167030419s)
	I1010 19:24:41.949327  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949345  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949625  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949652  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949660  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949661  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949668  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949679  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949761  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949804  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949819  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949826  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.950811  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950824  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.950827  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950822  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950845  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950811  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.957797  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.957814  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.958071  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.958077  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.958099  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005530  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065377363s)
	I1010 19:24:42.005590  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.005602  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.005914  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.005937  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005935  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.005972  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.006003  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.006280  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.006313  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.006335  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.006354  147213 addons.go:475] Verifying addon metrics-server=true in "no-preload-320324"
	I1010 19:24:42.008523  147213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1010 19:24:39.028363  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:39.528571  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.027992  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.528552  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.028220  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.527889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:42.028462  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:42.028549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:42.070669  148123 cri.go:89] found id: ""
	I1010 19:24:42.070701  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.070710  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:42.070716  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:42.070775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:42.110693  148123 cri.go:89] found id: ""
	I1010 19:24:42.110731  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.110748  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:42.110756  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:42.110816  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:42.148471  148123 cri.go:89] found id: ""
	I1010 19:24:42.148511  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.148525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:42.148535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:42.148603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:42.184637  148123 cri.go:89] found id: ""
	I1010 19:24:42.184670  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.184683  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:42.184691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:42.184759  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:42.226794  148123 cri.go:89] found id: ""
	I1010 19:24:42.226834  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.226848  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:42.226857  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:42.226942  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:42.262969  148123 cri.go:89] found id: ""
	I1010 19:24:42.263004  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.263017  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:42.263027  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:42.263167  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:42.301063  148123 cri.go:89] found id: ""
	I1010 19:24:42.301088  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.301096  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:42.301102  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:42.301153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:42.338829  148123 cri.go:89] found id: ""
	I1010 19:24:42.338862  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.338873  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:42.338886  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:42.338901  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:42.391426  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:42.391478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:42.405520  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:42.405563  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:42.544245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:42.544275  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:42.544292  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:42.620760  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:42.620804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:42.009965  147213 addons.go:510] duration metric: took 1.596190602s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1010 19:24:42.672792  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:41.066744  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.066850  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.557231  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:46.051820  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:45.164389  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:45.179714  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:45.179776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:45.216271  148123 cri.go:89] found id: ""
	I1010 19:24:45.216308  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.216319  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:45.216327  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:45.216394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:45.255109  148123 cri.go:89] found id: ""
	I1010 19:24:45.255154  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.255167  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:45.255176  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:45.255248  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:45.294338  148123 cri.go:89] found id: ""
	I1010 19:24:45.294369  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.294380  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:45.294388  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:45.294457  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:45.339573  148123 cri.go:89] found id: ""
	I1010 19:24:45.339606  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.339626  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:45.339636  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:45.339703  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:45.400184  148123 cri.go:89] found id: ""
	I1010 19:24:45.400214  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.400225  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:45.400234  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:45.400301  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:45.436152  148123 cri.go:89] found id: ""
	I1010 19:24:45.436183  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.436195  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:45.436203  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:45.436262  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.484321  148123 cri.go:89] found id: ""
	I1010 19:24:45.484347  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.484355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:45.484361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:45.484441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:45.532893  148123 cri.go:89] found id: ""
	I1010 19:24:45.532923  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.532932  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:45.532943  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:45.532958  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:45.585183  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:45.585214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:45.638800  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:45.638847  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:45.653928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:45.653961  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:45.733534  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:45.733564  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:45.733580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.314367  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:48.329687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:48.329754  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:48.370372  148123 cri.go:89] found id: ""
	I1010 19:24:48.370400  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.370409  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:48.370415  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:48.370490  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:48.409362  148123 cri.go:89] found id: ""
	I1010 19:24:48.409396  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.409407  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:48.409415  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:48.409500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:48.449640  148123 cri.go:89] found id: ""
	I1010 19:24:48.449672  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.449681  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:48.449687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:48.449746  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:48.485053  148123 cri.go:89] found id: ""
	I1010 19:24:48.485104  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.485121  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:48.485129  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:48.485183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:48.520083  148123 cri.go:89] found id: ""
	I1010 19:24:48.520113  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.520121  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:48.520127  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:48.520185  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:48.555112  148123 cri.go:89] found id: ""
	I1010 19:24:48.555138  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.555149  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:48.555156  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:48.555241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.171882  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:47.673073  147213 node_ready.go:49] node "no-preload-320324" has status "Ready":"True"
	I1010 19:24:47.673103  147213 node_ready.go:38] duration metric: took 7.00601327s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:47.673117  147213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:47.682195  147213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690079  147213 pod_ready.go:93] pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.690111  147213 pod_ready.go:82] duration metric: took 7.882823ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690126  147213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698009  147213 pod_ready.go:93] pod "etcd-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.698038  147213 pod_ready.go:82] duration metric: took 7.903016ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698052  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:45.066893  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:47.566144  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.551853  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.050365  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.592682  148123 cri.go:89] found id: ""
	I1010 19:24:48.592710  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.592719  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:48.592725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:48.592775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:48.628449  148123 cri.go:89] found id: ""
	I1010 19:24:48.628482  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.628490  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:48.628500  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:48.628513  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.709385  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:48.709428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:48.752542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:48.752677  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:48.812331  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:48.812385  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:48.827057  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:48.827095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:48.908312  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.409122  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:51.423404  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:51.423482  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:51.465742  148123 cri.go:89] found id: ""
	I1010 19:24:51.465781  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.465793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:51.465803  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:51.465875  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:51.501597  148123 cri.go:89] found id: ""
	I1010 19:24:51.501630  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.501641  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:51.501648  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:51.501711  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:51.537500  148123 cri.go:89] found id: ""
	I1010 19:24:51.537539  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.537551  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:51.537559  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:51.537626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:51.579546  148123 cri.go:89] found id: ""
	I1010 19:24:51.579576  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.579587  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:51.579595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:51.579660  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:51.619893  148123 cri.go:89] found id: ""
	I1010 19:24:51.619917  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.619925  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:51.619931  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:51.619999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:51.655787  148123 cri.go:89] found id: ""
	I1010 19:24:51.655824  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.655837  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:51.655846  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:51.655921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:51.699582  148123 cri.go:89] found id: ""
	I1010 19:24:51.699611  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.699619  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:51.699625  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:51.699685  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:51.738658  148123 cri.go:89] found id: ""
	I1010 19:24:51.738689  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.738697  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:51.738707  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:51.738721  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:51.789556  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:51.789587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:51.858919  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:51.858968  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:51.887818  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:51.887854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:51.964408  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.964434  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:51.964449  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:49.705130  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.705847  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.205374  147213 pod_ready.go:93] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.205401  147213 pod_ready.go:82] duration metric: took 5.507341974s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.205413  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210237  147213 pod_ready.go:93] pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.210259  147213 pod_ready.go:82] duration metric: took 4.83925ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210269  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215158  147213 pod_ready.go:93] pod "kube-proxy-vn6sv" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.215186  147213 pod_ready.go:82] duration metric: took 4.909888ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215198  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220077  147213 pod_ready.go:93] pod "kube-scheduler-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.220097  147213 pod_ready.go:82] duration metric: took 4.890652ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220105  147213 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:50.066165  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:52.066343  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.552604  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:56.050748  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.545627  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:54.560748  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:54.560830  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:54.595863  148123 cri.go:89] found id: ""
	I1010 19:24:54.595893  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.595903  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:54.595912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:54.595978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:54.633645  148123 cri.go:89] found id: ""
	I1010 19:24:54.633681  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.633693  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:54.633701  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:54.633761  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:54.668269  148123 cri.go:89] found id: ""
	I1010 19:24:54.668299  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.668311  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:54.668317  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:54.668369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:54.706559  148123 cri.go:89] found id: ""
	I1010 19:24:54.706591  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.706600  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:54.706608  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:54.706673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:54.746248  148123 cri.go:89] found id: ""
	I1010 19:24:54.746283  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.746295  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:54.746303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:54.746383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:54.782982  148123 cri.go:89] found id: ""
	I1010 19:24:54.783017  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.783027  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:54.783033  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:54.783085  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:54.819664  148123 cri.go:89] found id: ""
	I1010 19:24:54.819700  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.819713  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:54.819722  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:54.819797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:54.859603  148123 cri.go:89] found id: ""
	I1010 19:24:54.859632  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.859640  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:54.859650  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:54.859662  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:54.910949  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:54.910987  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:54.925941  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:54.925975  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:55.009626  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:55.009654  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:55.009669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.097196  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:55.097237  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:57.641732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:57.661141  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:57.661222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:57.716054  148123 cri.go:89] found id: ""
	I1010 19:24:57.716086  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.716094  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:57.716100  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:57.716178  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:57.766862  148123 cri.go:89] found id: ""
	I1010 19:24:57.766892  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.766906  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:57.766917  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:57.766989  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:57.814776  148123 cri.go:89] found id: ""
	I1010 19:24:57.814808  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.814821  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:57.814829  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:57.814899  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:57.850458  148123 cri.go:89] found id: ""
	I1010 19:24:57.850495  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.850505  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:57.850516  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:57.850667  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:57.886541  148123 cri.go:89] found id: ""
	I1010 19:24:57.886566  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.886575  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:57.886581  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:57.886645  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:57.920748  148123 cri.go:89] found id: ""
	I1010 19:24:57.920783  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.920795  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:57.920802  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:57.920887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:57.957801  148123 cri.go:89] found id: ""
	I1010 19:24:57.957833  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.957844  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:57.957852  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:57.957919  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:57.995648  148123 cri.go:89] found id: ""
	I1010 19:24:57.995683  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.995694  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:57.995706  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:57.995723  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:58.034987  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:58.035030  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:58.089014  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:58.089066  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:58.104179  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:58.104211  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:58.178239  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:58.178271  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:58.178291  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.229459  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.727298  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.566779  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.065902  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:58.051248  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.550512  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.756057  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:00.770086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:00.770150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:00.811400  148123 cri.go:89] found id: ""
	I1010 19:25:00.811444  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.811456  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:00.811466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:00.811533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:00.848821  148123 cri.go:89] found id: ""
	I1010 19:25:00.848859  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.848870  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:00.848877  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:00.848947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:00.883529  148123 cri.go:89] found id: ""
	I1010 19:25:00.883557  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.883566  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:00.883573  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:00.883631  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:00.922952  148123 cri.go:89] found id: ""
	I1010 19:25:00.922982  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.922992  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:00.922998  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:00.923057  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:00.956116  148123 cri.go:89] found id: ""
	I1010 19:25:00.956147  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.956159  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:00.956168  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:00.956233  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:00.998892  148123 cri.go:89] found id: ""
	I1010 19:25:00.998919  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.998930  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:00.998939  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:00.998996  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:01.037617  148123 cri.go:89] found id: ""
	I1010 19:25:01.037649  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.037657  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:01.037663  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:01.037717  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:01.078003  148123 cri.go:89] found id: ""
	I1010 19:25:01.078034  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.078046  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:01.078055  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:01.078069  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:01.118948  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:01.118977  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:01.171053  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:01.171091  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:01.187027  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:01.187060  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:01.261288  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:01.261315  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:01.261330  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:59.728997  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.227142  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:59.566448  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.066184  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.551951  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:05.050558  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:03.848144  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:03.862527  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:03.862601  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:03.899120  148123 cri.go:89] found id: ""
	I1010 19:25:03.899159  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.899180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:03.899187  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:03.899276  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:03.935461  148123 cri.go:89] found id: ""
	I1010 19:25:03.935492  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.935500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:03.935507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:03.935569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:03.971892  148123 cri.go:89] found id: ""
	I1010 19:25:03.971925  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.971937  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:03.971945  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:03.972019  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:04.008341  148123 cri.go:89] found id: ""
	I1010 19:25:04.008368  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.008377  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:04.008390  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:04.008447  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:04.042007  148123 cri.go:89] found id: ""
	I1010 19:25:04.042036  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.042044  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:04.042051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:04.042101  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:04.078017  148123 cri.go:89] found id: ""
	I1010 19:25:04.078045  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.078053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:04.078059  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:04.078112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:04.116792  148123 cri.go:89] found id: ""
	I1010 19:25:04.116823  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.116832  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:04.116839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:04.116928  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:04.153468  148123 cri.go:89] found id: ""
	I1010 19:25:04.153496  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.153503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:04.153513  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:04.153525  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:04.230646  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:04.230683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:04.270975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:04.271015  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.320845  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:04.320902  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:04.337789  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:04.337828  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:04.416077  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:06.916412  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:06.931309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:06.931376  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:06.969224  148123 cri.go:89] found id: ""
	I1010 19:25:06.969257  148123 logs.go:282] 0 containers: []
	W1010 19:25:06.969269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:06.969277  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:06.969346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:07.006598  148123 cri.go:89] found id: ""
	I1010 19:25:07.006653  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.006667  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:07.006676  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:07.006744  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:07.046392  148123 cri.go:89] found id: ""
	I1010 19:25:07.046418  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.046427  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:07.046433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:07.046491  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:07.089570  148123 cri.go:89] found id: ""
	I1010 19:25:07.089603  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.089615  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:07.089624  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:07.089689  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:07.127152  148123 cri.go:89] found id: ""
	I1010 19:25:07.127185  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.127195  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:07.127204  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:07.127282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:07.160516  148123 cri.go:89] found id: ""
	I1010 19:25:07.160543  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.160554  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:07.160563  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:07.160635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:07.197270  148123 cri.go:89] found id: ""
	I1010 19:25:07.197307  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.197318  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:07.197327  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:07.197395  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:07.232658  148123 cri.go:89] found id: ""
	I1010 19:25:07.232687  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.232696  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:07.232706  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:07.232720  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:07.246579  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:07.246609  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:07.315200  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:07.315230  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:07.315251  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:07.389532  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:07.389580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:07.438808  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:07.438837  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.227537  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.727865  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:04.067121  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.565089  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:08.565565  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:07.051371  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.051420  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.054211  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.990294  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:10.004155  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:10.004270  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:10.039034  148123 cri.go:89] found id: ""
	I1010 19:25:10.039068  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.039078  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:10.039087  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:10.039174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:10.079936  148123 cri.go:89] found id: ""
	I1010 19:25:10.079967  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.079978  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:10.079985  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:10.080068  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:10.116441  148123 cri.go:89] found id: ""
	I1010 19:25:10.116471  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.116483  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:10.116491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:10.116556  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:10.151998  148123 cri.go:89] found id: ""
	I1010 19:25:10.152045  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.152058  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:10.152075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:10.152153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:10.188514  148123 cri.go:89] found id: ""
	I1010 19:25:10.188547  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.188565  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:10.188574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:10.188640  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:10.223648  148123 cri.go:89] found id: ""
	I1010 19:25:10.223682  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.223694  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:10.223702  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:10.223771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:10.264117  148123 cri.go:89] found id: ""
	I1010 19:25:10.264143  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.264151  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:10.264158  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:10.264215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:10.302566  148123 cri.go:89] found id: ""
	I1010 19:25:10.302601  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.302613  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:10.302625  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:10.302649  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:10.316567  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:10.316606  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:10.388642  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:10.388674  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:10.388692  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:10.471035  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:10.471090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:10.519600  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:10.519645  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:13.075428  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:13.090830  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:13.090915  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:13.126302  148123 cri.go:89] found id: ""
	I1010 19:25:13.126336  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.126348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:13.126357  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:13.126429  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:13.163309  148123 cri.go:89] found id: ""
	I1010 19:25:13.163344  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.163357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:13.163365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:13.163428  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:13.197569  148123 cri.go:89] found id: ""
	I1010 19:25:13.197605  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.197615  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:13.197621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:13.197692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:13.236299  148123 cri.go:89] found id: ""
	I1010 19:25:13.236328  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.236336  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:13.236342  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:13.236406  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:13.275667  148123 cri.go:89] found id: ""
	I1010 19:25:13.275696  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.275705  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:13.275711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:13.275776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:13.309728  148123 cri.go:89] found id: ""
	I1010 19:25:13.309763  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.309774  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:13.309786  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:13.309854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:13.347464  148123 cri.go:89] found id: ""
	I1010 19:25:13.347493  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.347504  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:13.347513  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:13.347576  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:13.385086  148123 cri.go:89] found id: ""
	I1010 19:25:13.385119  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.385130  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:13.385142  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:13.385156  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:13.466513  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:13.466553  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:13.508610  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:13.508651  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:09.226850  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.227241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.726879  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:10.565663  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:12.565845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.555465  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:16.051764  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.564897  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:13.564932  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:13.578771  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:13.578803  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:13.651857  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.153066  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:16.167613  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:16.167698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:16.203867  148123 cri.go:89] found id: ""
	I1010 19:25:16.203897  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.203906  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:16.203912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:16.203965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:16.242077  148123 cri.go:89] found id: ""
	I1010 19:25:16.242109  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.242130  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:16.242138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:16.242234  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:16.283792  148123 cri.go:89] found id: ""
	I1010 19:25:16.283825  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.283836  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:16.283844  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:16.283907  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:16.319934  148123 cri.go:89] found id: ""
	I1010 19:25:16.319969  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.319980  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:16.319990  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:16.320063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:16.360439  148123 cri.go:89] found id: ""
	I1010 19:25:16.360482  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.360495  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:16.360504  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:16.360569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:16.395890  148123 cri.go:89] found id: ""
	I1010 19:25:16.395922  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.395931  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:16.395941  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:16.396009  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:16.431396  148123 cri.go:89] found id: ""
	I1010 19:25:16.431442  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.431456  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:16.431463  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:16.431531  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:16.471768  148123 cri.go:89] found id: ""
	I1010 19:25:16.471796  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.471804  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:16.471814  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:16.471830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:16.526112  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:16.526155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:16.541041  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:16.541074  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:16.623245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.623273  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:16.623289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:16.710840  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:16.710884  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:15.727171  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.728705  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:15.067362  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.566242  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:18.551207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:21.050222  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:19.257566  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:19.273982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:19.274063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:19.311360  148123 cri.go:89] found id: ""
	I1010 19:25:19.311392  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.311404  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:19.311411  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:19.311475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:19.347025  148123 cri.go:89] found id: ""
	I1010 19:25:19.347053  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.347062  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:19.347068  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:19.347120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:19.383575  148123 cri.go:89] found id: ""
	I1010 19:25:19.383608  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.383620  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:19.383633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:19.383698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:19.419245  148123 cri.go:89] found id: ""
	I1010 19:25:19.419270  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.419278  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:19.419284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:19.419334  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:19.453563  148123 cri.go:89] found id: ""
	I1010 19:25:19.453591  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.453601  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:19.453658  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:19.453715  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:19.496430  148123 cri.go:89] found id: ""
	I1010 19:25:19.496458  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.496466  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:19.496473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:19.496533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:19.534626  148123 cri.go:89] found id: ""
	I1010 19:25:19.534670  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.534682  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:19.534691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:19.534757  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:19.574186  148123 cri.go:89] found id: ""
	I1010 19:25:19.574236  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.574248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:19.574261  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:19.574283  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:19.587952  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:19.587989  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:19.664607  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:19.664644  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:19.664664  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:19.742185  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:19.742224  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:19.781530  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:19.781562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.335398  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:22.349627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:22.349693  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:22.389075  148123 cri.go:89] found id: ""
	I1010 19:25:22.389102  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.389110  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:22.389117  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:22.389168  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:22.424331  148123 cri.go:89] found id: ""
	I1010 19:25:22.424368  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.424381  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:22.424389  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:22.424460  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:22.464011  148123 cri.go:89] found id: ""
	I1010 19:25:22.464051  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.464062  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:22.464070  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:22.464141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:22.500786  148123 cri.go:89] found id: ""
	I1010 19:25:22.500819  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.500828  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:22.500835  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:22.500914  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:22.538016  148123 cri.go:89] found id: ""
	I1010 19:25:22.538053  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.538061  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:22.538068  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:22.538124  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:22.576600  148123 cri.go:89] found id: ""
	I1010 19:25:22.576629  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.576637  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:22.576643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:22.576702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:22.616020  148123 cri.go:89] found id: ""
	I1010 19:25:22.616048  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.616057  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:22.616064  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:22.616114  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:22.652238  148123 cri.go:89] found id: ""
	I1010 19:25:22.652271  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.652283  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:22.652295  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:22.652311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:22.726182  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:22.726216  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:22.726234  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:22.814040  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:22.814086  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:22.859254  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:22.859289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.916565  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:22.916607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:20.227871  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.732566  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:20.066872  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.566173  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:23.050833  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.551662  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.432636  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:25.446982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:25.447047  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.482440  148123 cri.go:89] found id: ""
	I1010 19:25:25.482472  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.482484  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:25.482492  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:25.482552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:25.517709  148123 cri.go:89] found id: ""
	I1010 19:25:25.517742  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.517756  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:25.517764  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:25.517867  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:25.561504  148123 cri.go:89] found id: ""
	I1010 19:25:25.561532  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.561544  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:25.561552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:25.561616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:25.601704  148123 cri.go:89] found id: ""
	I1010 19:25:25.601741  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.601753  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:25.601762  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:25.601825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:25.638519  148123 cri.go:89] found id: ""
	I1010 19:25:25.638544  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.638552  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:25.638558  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:25.638609  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:25.673390  148123 cri.go:89] found id: ""
	I1010 19:25:25.673425  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.673436  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:25.673447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:25.673525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:25.708251  148123 cri.go:89] found id: ""
	I1010 19:25:25.708278  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.708286  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:25.708293  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:25.708354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:25.749793  148123 cri.go:89] found id: ""
	I1010 19:25:25.749826  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.749837  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:25.749846  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:25.749861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:25.802511  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:25.802554  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:25.817523  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:25.817562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:25.898595  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:25.898619  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:25.898635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:25.978629  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:25.978684  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:28.520165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:28.532725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:28.532793  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.226875  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.729015  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.066298  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.066963  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.551915  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.558497  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:28.571667  148123 cri.go:89] found id: ""
	I1010 19:25:28.571695  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.571704  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:28.571710  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:28.571770  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:28.608136  148123 cri.go:89] found id: ""
	I1010 19:25:28.608165  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.608174  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:28.608181  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:28.608244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:28.642123  148123 cri.go:89] found id: ""
	I1010 19:25:28.642161  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.642173  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:28.642181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:28.642242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:28.678600  148123 cri.go:89] found id: ""
	I1010 19:25:28.678633  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.678643  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:28.678651  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:28.678702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:28.716627  148123 cri.go:89] found id: ""
	I1010 19:25:28.716660  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.716673  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:28.716681  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:28.716766  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:28.752750  148123 cri.go:89] found id: ""
	I1010 19:25:28.752786  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.752798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:28.752806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:28.752898  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:28.790021  148123 cri.go:89] found id: ""
	I1010 19:25:28.790054  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.790066  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:28.790075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:28.790128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:28.828760  148123 cri.go:89] found id: ""
	I1010 19:25:28.828803  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.828827  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:28.828839  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:28.828877  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:28.879773  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:28.879811  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:28.894311  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:28.894342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:28.968675  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:28.968706  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:28.968729  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:29.045862  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:29.045913  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:31.588772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:31.601453  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:31.601526  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:31.637056  148123 cri.go:89] found id: ""
	I1010 19:25:31.637090  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.637101  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:31.637108  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:31.637183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:31.677825  148123 cri.go:89] found id: ""
	I1010 19:25:31.677854  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.677862  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:31.677867  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:31.677920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:31.715372  148123 cri.go:89] found id: ""
	I1010 19:25:31.715402  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.715411  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:31.715419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:31.715470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:31.753767  148123 cri.go:89] found id: ""
	I1010 19:25:31.753794  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.753806  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:31.753814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:31.753879  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:31.792999  148123 cri.go:89] found id: ""
	I1010 19:25:31.793024  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.793035  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:31.793050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:31.793106  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:31.831571  148123 cri.go:89] found id: ""
	I1010 19:25:31.831598  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.831607  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:31.831614  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:31.831673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:31.880382  148123 cri.go:89] found id: ""
	I1010 19:25:31.880422  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.880436  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:31.880447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:31.880527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:31.919596  148123 cri.go:89] found id: ""
	I1010 19:25:31.919627  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.919637  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:31.919647  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:31.919661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:31.967963  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:31.968001  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:31.983064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:31.983106  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:32.056073  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:32.056105  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:32.056122  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:32.142927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:32.142978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:30.226683  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.227047  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.565699  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:31.566109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.051411  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.052064  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.550062  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.690025  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:34.705326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:34.705413  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:34.740756  148123 cri.go:89] found id: ""
	I1010 19:25:34.740786  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.740793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:34.740799  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:34.740872  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:34.777021  148123 cri.go:89] found id: ""
	I1010 19:25:34.777049  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.777061  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:34.777069  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:34.777123  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:34.826699  148123 cri.go:89] found id: ""
	I1010 19:25:34.826735  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.826747  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:34.826754  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:34.826896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:34.864297  148123 cri.go:89] found id: ""
	I1010 19:25:34.864329  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.864340  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:34.864348  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:34.864415  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:34.899198  148123 cri.go:89] found id: ""
	I1010 19:25:34.899234  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.899247  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:34.899256  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:34.899315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:34.934132  148123 cri.go:89] found id: ""
	I1010 19:25:34.934162  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.934170  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:34.934176  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:34.934238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:34.969858  148123 cri.go:89] found id: ""
	I1010 19:25:34.969891  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.969903  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:34.969911  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:34.969978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:35.006082  148123 cri.go:89] found id: ""
	I1010 19:25:35.006121  148123 logs.go:282] 0 containers: []
	W1010 19:25:35.006132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:35.006145  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:35.006160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:35.060596  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:35.060631  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:35.076246  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:35.076281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:35.151879  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:35.151912  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:35.151931  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:35.231802  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:35.231845  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:37.774550  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:37.787871  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:37.787938  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:37.824273  148123 cri.go:89] found id: ""
	I1010 19:25:37.824306  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.824317  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:37.824325  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:37.824410  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:37.860548  148123 cri.go:89] found id: ""
	I1010 19:25:37.860582  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.860593  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:37.860601  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:37.860677  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:37.894616  148123 cri.go:89] found id: ""
	I1010 19:25:37.894644  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.894654  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:37.894662  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:37.894730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:37.931247  148123 cri.go:89] found id: ""
	I1010 19:25:37.931272  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.931280  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:37.931287  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:37.931337  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:37.968028  148123 cri.go:89] found id: ""
	I1010 19:25:37.968068  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.968079  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:37.968086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:37.968153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:38.004661  148123 cri.go:89] found id: ""
	I1010 19:25:38.004692  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.004700  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:38.004706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:38.004760  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:38.040874  148123 cri.go:89] found id: ""
	I1010 19:25:38.040906  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.040915  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:38.040922  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:38.040990  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:38.083771  148123 cri.go:89] found id: ""
	I1010 19:25:38.083802  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.083811  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:38.083821  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:38.083836  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:38.099645  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:38.099683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:38.172168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:38.172207  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:38.172223  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:38.248523  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:38.248561  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:38.306594  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:38.306630  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:34.728106  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:37.226285  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.065919  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.066751  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.067361  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.550359  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.551190  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.873868  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:40.889561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:40.889668  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:40.926769  148123 cri.go:89] found id: ""
	I1010 19:25:40.926800  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.926808  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:40.926816  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:40.926869  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:40.962564  148123 cri.go:89] found id: ""
	I1010 19:25:40.962592  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.962600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:40.962606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:40.962659  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:40.997856  148123 cri.go:89] found id: ""
	I1010 19:25:40.997885  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.997894  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:40.997900  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:40.997951  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:41.035479  148123 cri.go:89] found id: ""
	I1010 19:25:41.035506  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.035514  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:41.035520  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:41.035575  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:41.077733  148123 cri.go:89] found id: ""
	I1010 19:25:41.077817  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.077830  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:41.077840  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:41.077916  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:41.115439  148123 cri.go:89] found id: ""
	I1010 19:25:41.115476  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.115489  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:41.115498  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:41.115552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:41.152586  148123 cri.go:89] found id: ""
	I1010 19:25:41.152625  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.152636  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:41.152643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:41.152695  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:41.186807  148123 cri.go:89] found id: ""
	I1010 19:25:41.186840  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.186851  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:41.186862  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:41.186880  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:41.200404  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:41.200447  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:41.280717  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:41.280744  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:41.280760  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:41.360561  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:41.360611  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:41.404461  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:41.404497  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:39.226903  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:41.227077  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.727197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.570404  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.066523  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.050813  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.051094  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.955723  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:43.970895  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:43.970964  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:44.012162  148123 cri.go:89] found id: ""
	I1010 19:25:44.012194  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.012203  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:44.012209  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:44.012282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:44.074583  148123 cri.go:89] found id: ""
	I1010 19:25:44.074606  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.074614  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:44.074620  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:44.074675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:44.133562  148123 cri.go:89] found id: ""
	I1010 19:25:44.133588  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.133596  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:44.133602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:44.133658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:44.168832  148123 cri.go:89] found id: ""
	I1010 19:25:44.168883  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.168896  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:44.168908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:44.168967  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:44.204896  148123 cri.go:89] found id: ""
	I1010 19:25:44.204923  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.204934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:44.204943  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:44.205002  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:44.242410  148123 cri.go:89] found id: ""
	I1010 19:25:44.242437  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.242448  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:44.242456  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:44.242524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:44.278553  148123 cri.go:89] found id: ""
	I1010 19:25:44.278581  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.278589  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:44.278595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:44.278658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:44.316099  148123 cri.go:89] found id: ""
	I1010 19:25:44.316125  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.316132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:44.316141  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:44.316155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:44.365809  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:44.365860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:44.380129  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:44.380162  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:44.454241  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:44.454267  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:44.454281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:44.530928  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:44.530974  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.078279  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:47.092233  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:47.092310  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:47.129517  148123 cri.go:89] found id: ""
	I1010 19:25:47.129557  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.129573  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:47.129582  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:47.129654  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:47.170309  148123 cri.go:89] found id: ""
	I1010 19:25:47.170345  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.170357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:47.170365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:47.170432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:47.205110  148123 cri.go:89] found id: ""
	I1010 19:25:47.205145  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.205156  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:47.205162  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:47.205228  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:47.245668  148123 cri.go:89] found id: ""
	I1010 19:25:47.245695  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.245704  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:47.245711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:47.245768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:47.284549  148123 cri.go:89] found id: ""
	I1010 19:25:47.284576  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.284584  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:47.284590  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:47.284643  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:47.324676  148123 cri.go:89] found id: ""
	I1010 19:25:47.324708  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.324720  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:47.324729  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:47.324782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:47.364315  148123 cri.go:89] found id: ""
	I1010 19:25:47.364347  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.364356  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:47.364362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:47.364433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:47.400611  148123 cri.go:89] found id: ""
	I1010 19:25:47.400641  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.400652  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:47.400664  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:47.400680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:47.415185  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:47.415227  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:47.481638  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:47.481665  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:47.481681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:47.560135  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:47.560171  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.602144  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:47.602174  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:46.227386  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:48.227699  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.066887  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.565340  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.051459  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:49.550170  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:51.554542  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.151770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:50.165336  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:50.165403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:50.201045  148123 cri.go:89] found id: ""
	I1010 19:25:50.201072  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.201082  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:50.201089  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:50.201154  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:50.237047  148123 cri.go:89] found id: ""
	I1010 19:25:50.237082  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.237094  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:50.237102  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:50.237174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:50.273727  148123 cri.go:89] found id: ""
	I1010 19:25:50.273756  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.273767  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:50.273780  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:50.273843  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:50.311397  148123 cri.go:89] found id: ""
	I1010 19:25:50.311424  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.311433  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:50.311450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:50.311513  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:50.347593  148123 cri.go:89] found id: ""
	I1010 19:25:50.347625  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.347637  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:50.347705  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:50.347782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:50.382195  148123 cri.go:89] found id: ""
	I1010 19:25:50.382228  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.382240  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:50.382247  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:50.382313  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:50.419189  148123 cri.go:89] found id: ""
	I1010 19:25:50.419221  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.419229  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:50.419236  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:50.419297  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:50.454368  148123 cri.go:89] found id: ""
	I1010 19:25:50.454399  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.454410  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:50.454421  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:50.454440  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:50.495575  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:50.495623  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.548691  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:50.548737  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:50.564585  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:50.564621  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:50.637152  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:50.637174  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:50.637190  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.221939  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:53.235889  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:53.235968  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:53.278589  148123 cri.go:89] found id: ""
	I1010 19:25:53.278620  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.278632  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:53.278640  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:53.278705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:53.315062  148123 cri.go:89] found id: ""
	I1010 19:25:53.315090  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.315101  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:53.315109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:53.315182  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:53.348994  148123 cri.go:89] found id: ""
	I1010 19:25:53.349031  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.349040  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:53.349046  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:53.349099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:53.383334  148123 cri.go:89] found id: ""
	I1010 19:25:53.383363  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.383373  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:53.383380  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:53.383450  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:53.417888  148123 cri.go:89] found id: ""
	I1010 19:25:53.417920  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.417931  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:53.417938  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:53.418013  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:53.454757  148123 cri.go:89] found id: ""
	I1010 19:25:53.454787  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.454798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:53.454806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:53.454871  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:53.491422  148123 cri.go:89] found id: ""
	I1010 19:25:53.491452  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.491464  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:53.491472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:53.491541  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:53.531240  148123 cri.go:89] found id: ""
	I1010 19:25:53.531271  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.531282  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:53.531294  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:53.531310  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.727196  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.226957  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.065907  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:52.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:54.051112  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:56.554137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.582576  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:53.582608  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:53.596481  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:53.596511  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:53.666134  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:53.666162  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:53.666178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.745621  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:53.745659  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.290744  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:56.307536  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:56.307610  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:56.353456  148123 cri.go:89] found id: ""
	I1010 19:25:56.353484  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.353494  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:56.353501  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:56.353553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:56.394093  148123 cri.go:89] found id: ""
	I1010 19:25:56.394120  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.394131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:56.394138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:56.394203  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:56.428388  148123 cri.go:89] found id: ""
	I1010 19:25:56.428428  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.428441  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:56.428450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:56.428524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:56.464414  148123 cri.go:89] found id: ""
	I1010 19:25:56.464459  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.464472  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:56.464480  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:56.464563  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:56.505271  148123 cri.go:89] found id: ""
	I1010 19:25:56.505307  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.505316  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:56.505322  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:56.505374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:56.546424  148123 cri.go:89] found id: ""
	I1010 19:25:56.546456  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.546467  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:56.546475  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:56.546545  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:56.587458  148123 cri.go:89] found id: ""
	I1010 19:25:56.587489  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.587501  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:56.587509  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:56.587588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:56.628248  148123 cri.go:89] found id: ""
	I1010 19:25:56.628279  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.628291  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:56.628304  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:56.628320  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:56.642109  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:56.642141  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:56.723646  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:56.723678  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:56.723696  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:56.805849  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:56.805899  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.849863  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:56.849891  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:55.230447  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.726896  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:55.066248  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.565240  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.051145  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:01.554276  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.407093  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:59.422915  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:59.423005  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:59.462867  148123 cri.go:89] found id: ""
	I1010 19:25:59.462898  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.462909  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:59.462917  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:59.462980  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:59.496931  148123 cri.go:89] found id: ""
	I1010 19:25:59.496958  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.496968  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:59.496983  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:59.497049  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:59.532241  148123 cri.go:89] found id: ""
	I1010 19:25:59.532271  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.532283  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:59.532291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:59.532352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:59.571285  148123 cri.go:89] found id: ""
	I1010 19:25:59.571313  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.571324  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:59.571331  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:59.571391  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:59.606687  148123 cri.go:89] found id: ""
	I1010 19:25:59.606721  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.606734  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:59.606741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:59.606800  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:59.643247  148123 cri.go:89] found id: ""
	I1010 19:25:59.643276  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.643286  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:59.643294  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:59.643369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:59.679305  148123 cri.go:89] found id: ""
	I1010 19:25:59.679335  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.679344  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:59.679350  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:59.679407  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:59.716149  148123 cri.go:89] found id: ""
	I1010 19:25:59.716198  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.716210  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:59.716222  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:59.716239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:59.730334  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:59.730364  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:59.799759  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.799786  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:59.799804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:59.881883  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:59.881925  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:59.923755  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:59.923784  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.478043  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:02.492627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:02.492705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:02.532133  148123 cri.go:89] found id: ""
	I1010 19:26:02.532160  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.532172  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:02.532179  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:02.532244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:02.571915  148123 cri.go:89] found id: ""
	I1010 19:26:02.571943  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.571951  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:02.571957  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:02.572014  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:02.616330  148123 cri.go:89] found id: ""
	I1010 19:26:02.616365  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.616376  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:02.616385  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:02.616455  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:02.662245  148123 cri.go:89] found id: ""
	I1010 19:26:02.662275  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.662284  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:02.662291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:02.662354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:02.698692  148123 cri.go:89] found id: ""
	I1010 19:26:02.698720  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.698731  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:02.698739  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:02.698805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:02.736643  148123 cri.go:89] found id: ""
	I1010 19:26:02.736667  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.736677  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:02.736685  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:02.736742  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:02.780356  148123 cri.go:89] found id: ""
	I1010 19:26:02.780388  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.780399  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:02.780407  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:02.780475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:02.816716  148123 cri.go:89] found id: ""
	I1010 19:26:02.816744  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.816752  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:02.816761  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:02.816775  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:02.899002  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:02.899046  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:02.942060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:02.942093  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.997605  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:02.997661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:03.011768  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:03.011804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:03.096404  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.727075  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.227526  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.565903  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.066179  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.049656  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.050425  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:05.596971  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:05.612524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:05.612598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:05.649999  148123 cri.go:89] found id: ""
	I1010 19:26:05.650032  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.650044  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:05.650052  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:05.650119  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:05.684365  148123 cri.go:89] found id: ""
	I1010 19:26:05.684398  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.684419  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:05.684427  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:05.684494  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:05.721801  148123 cri.go:89] found id: ""
	I1010 19:26:05.721832  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.721840  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:05.721847  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:05.721908  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:05.762010  148123 cri.go:89] found id: ""
	I1010 19:26:05.762037  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.762046  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:05.762056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:05.762117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:05.803425  148123 cri.go:89] found id: ""
	I1010 19:26:05.803457  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.803468  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:05.803476  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:05.803551  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:05.838800  148123 cri.go:89] found id: ""
	I1010 19:26:05.838833  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.838845  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:05.838853  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:05.838921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:05.875110  148123 cri.go:89] found id: ""
	I1010 19:26:05.875147  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.875158  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:05.875167  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:05.875245  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:05.908951  148123 cri.go:89] found id: ""
	I1010 19:26:05.908988  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.909000  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:05.909012  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:05.909027  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:05.922263  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:05.922293  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:05.996441  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:05.996465  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:05.996484  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:06.078113  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:06.078160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:06.122919  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:06.122956  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:04.726335  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.728178  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.066573  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.564991  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.566655  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.050522  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:10.550288  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.674464  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:08.688452  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:08.688570  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:08.726240  148123 cri.go:89] found id: ""
	I1010 19:26:08.726269  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.726279  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:08.726287  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:08.726370  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:08.764762  148123 cri.go:89] found id: ""
	I1010 19:26:08.764790  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.764799  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:08.764806  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:08.764887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:08.805522  148123 cri.go:89] found id: ""
	I1010 19:26:08.805552  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.805563  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:08.805572  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:08.805642  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:08.845694  148123 cri.go:89] found id: ""
	I1010 19:26:08.845729  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.845740  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:08.845747  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:08.845817  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:08.885024  148123 cri.go:89] found id: ""
	I1010 19:26:08.885054  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.885066  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:08.885074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:08.885138  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:08.927500  148123 cri.go:89] found id: ""
	I1010 19:26:08.927531  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.927540  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:08.927547  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:08.927616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:08.963870  148123 cri.go:89] found id: ""
	I1010 19:26:08.963904  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.963916  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:08.963924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:08.963988  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:08.997984  148123 cri.go:89] found id: ""
	I1010 19:26:08.998017  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.998027  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:08.998039  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:08.998056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:09.049307  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:09.049341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:09.063341  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:09.063432  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:09.145190  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:09.145214  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:09.145226  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:09.231409  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:09.231445  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:11.773475  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:11.788981  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:11.789055  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:11.830289  148123 cri.go:89] found id: ""
	I1010 19:26:11.830319  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.830327  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:11.830333  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:11.830399  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:11.867093  148123 cri.go:89] found id: ""
	I1010 19:26:11.867120  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.867131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:11.867138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:11.867212  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:11.902146  148123 cri.go:89] found id: ""
	I1010 19:26:11.902181  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.902192  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:11.902201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:11.902271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:11.938652  148123 cri.go:89] found id: ""
	I1010 19:26:11.938681  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.938691  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:11.938703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:11.938771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:11.975903  148123 cri.go:89] found id: ""
	I1010 19:26:11.975937  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.975947  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:11.975955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:11.976020  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:12.014083  148123 cri.go:89] found id: ""
	I1010 19:26:12.014111  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.014119  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:12.014126  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:12.014190  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:12.057832  148123 cri.go:89] found id: ""
	I1010 19:26:12.057858  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.057867  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:12.057874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:12.057925  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:12.103253  148123 cri.go:89] found id: ""
	I1010 19:26:12.103286  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.103298  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:12.103311  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:12.103326  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:12.171062  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:12.171104  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:12.185463  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:12.185496  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:12.261568  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:12.261592  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:12.261607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:12.345819  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:12.345860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:09.226954  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.227205  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.227457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.066777  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.565854  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:12.551323  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:15.051745  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:14.886283  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:14.901117  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:14.901213  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:14.935235  148123 cri.go:89] found id: ""
	I1010 19:26:14.935264  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.935273  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:14.935280  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:14.935336  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:14.970617  148123 cri.go:89] found id: ""
	I1010 19:26:14.970642  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.970652  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:14.970658  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:14.970722  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:15.011086  148123 cri.go:89] found id: ""
	I1010 19:26:15.011113  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.011122  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:15.011128  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:15.011188  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:15.053004  148123 cri.go:89] found id: ""
	I1010 19:26:15.053026  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.053033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:15.053039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:15.053099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:15.089275  148123 cri.go:89] found id: ""
	I1010 19:26:15.089304  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.089312  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:15.089319  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:15.089383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:15.126620  148123 cri.go:89] found id: ""
	I1010 19:26:15.126645  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.126654  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:15.126660  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:15.126723  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:15.160920  148123 cri.go:89] found id: ""
	I1010 19:26:15.160956  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.160969  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:15.160979  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:15.161037  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:15.197430  148123 cri.go:89] found id: ""
	I1010 19:26:15.197462  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.197474  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:15.197486  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:15.197507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:15.240905  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:15.240953  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:15.297663  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:15.297719  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:15.312279  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:15.312313  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:15.395296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:15.395320  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:15.395336  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:17.978030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:17.992574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:17.992651  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:18.028846  148123 cri.go:89] found id: ""
	I1010 19:26:18.028890  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.028901  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:18.028908  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:18.028965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:18.073004  148123 cri.go:89] found id: ""
	I1010 19:26:18.073033  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.073041  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:18.073047  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:18.073118  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:18.112062  148123 cri.go:89] found id: ""
	I1010 19:26:18.112098  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.112111  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:18.112121  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:18.112209  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:18.151565  148123 cri.go:89] found id: ""
	I1010 19:26:18.151597  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.151608  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:18.151616  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:18.151675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:18.197483  148123 cri.go:89] found id: ""
	I1010 19:26:18.197509  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.197518  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:18.197524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:18.197591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:18.235151  148123 cri.go:89] found id: ""
	I1010 19:26:18.235181  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.235193  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:18.235201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:18.235279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:18.273807  148123 cri.go:89] found id: ""
	I1010 19:26:18.273841  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.273852  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:18.273858  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:18.273920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:18.311644  148123 cri.go:89] found id: ""
	I1010 19:26:18.311677  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.311688  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:18.311700  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:18.311716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:18.355599  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:18.355635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:18.407786  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:18.407830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:18.422944  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:18.422976  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:18.496830  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:18.496873  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:18.496887  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:15.227600  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.726712  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:16.065701  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:18.066861  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.558257  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.050914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:21.108300  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:21.123177  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:21.123243  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:21.162544  148123 cri.go:89] found id: ""
	I1010 19:26:21.162575  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.162586  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:21.162595  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:21.162662  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:21.199799  148123 cri.go:89] found id: ""
	I1010 19:26:21.199828  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.199839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:21.199845  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:21.199901  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:21.246333  148123 cri.go:89] found id: ""
	I1010 19:26:21.246357  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.246366  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:21.246372  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:21.246433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:21.282681  148123 cri.go:89] found id: ""
	I1010 19:26:21.282719  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.282732  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:21.282740  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:21.282810  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:21.319500  148123 cri.go:89] found id: ""
	I1010 19:26:21.319535  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.319546  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:21.319554  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:21.319623  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:21.355779  148123 cri.go:89] found id: ""
	I1010 19:26:21.355809  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.355820  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:21.355828  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:21.355902  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:21.394567  148123 cri.go:89] found id: ""
	I1010 19:26:21.394608  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.394617  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:21.394623  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:21.394684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:21.430009  148123 cri.go:89] found id: ""
	I1010 19:26:21.430046  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.430058  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:21.430069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:21.430089  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:21.443267  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:21.443301  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:21.517046  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:21.517068  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:21.517083  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:21.594927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:21.594978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:21.634274  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:21.634311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:20.227157  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.727736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.566652  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:23.066459  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.550526  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.050647  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:24.189760  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:24.204355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:24.204420  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:24.248130  148123 cri.go:89] found id: ""
	I1010 19:26:24.248160  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.248171  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:24.248178  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:24.248244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:24.293281  148123 cri.go:89] found id: ""
	I1010 19:26:24.293312  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.293324  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:24.293331  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:24.293400  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:24.350709  148123 cri.go:89] found id: ""
	I1010 19:26:24.350743  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.350755  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:24.350765  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:24.350838  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:24.394113  148123 cri.go:89] found id: ""
	I1010 19:26:24.394152  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.394170  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:24.394181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:24.394256  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:24.445999  148123 cri.go:89] found id: ""
	I1010 19:26:24.446030  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.446042  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:24.446051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:24.446120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:24.483566  148123 cri.go:89] found id: ""
	I1010 19:26:24.483596  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.483605  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:24.483612  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:24.483665  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:24.520705  148123 cri.go:89] found id: ""
	I1010 19:26:24.520736  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.520748  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:24.520757  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:24.520825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:24.557316  148123 cri.go:89] found id: ""
	I1010 19:26:24.557346  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.557355  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:24.557364  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:24.557376  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:24.608065  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:24.608109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:24.623202  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:24.623250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:24.694665  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:24.694697  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:24.694716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.783369  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:24.783414  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.327524  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:27.341132  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:27.341214  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:27.376589  148123 cri.go:89] found id: ""
	I1010 19:26:27.376618  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.376627  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:27.376633  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:27.376684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:27.414452  148123 cri.go:89] found id: ""
	I1010 19:26:27.414491  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.414502  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:27.414510  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:27.414572  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:27.449839  148123 cri.go:89] found id: ""
	I1010 19:26:27.449867  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.449875  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:27.449882  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:27.449932  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:27.492445  148123 cri.go:89] found id: ""
	I1010 19:26:27.492472  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.492480  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:27.492487  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:27.492549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:27.533032  148123 cri.go:89] found id: ""
	I1010 19:26:27.533085  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.533095  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:27.533122  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:27.533199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:27.571096  148123 cri.go:89] found id: ""
	I1010 19:26:27.571122  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.571130  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:27.571135  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:27.571204  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:27.608762  148123 cri.go:89] found id: ""
	I1010 19:26:27.608798  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.608809  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:27.608818  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:27.608896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:27.648200  148123 cri.go:89] found id: ""
	I1010 19:26:27.648236  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.648248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:27.648260  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:27.648275  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.698124  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:27.698154  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:27.755009  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:27.755052  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:27.770048  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:27.770084  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:27.845475  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:27.845505  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:27.845519  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.729352  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:26.731831  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.566028  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.567052  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.555698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.049914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.423117  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:30.436706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:30.436768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:30.473367  148123 cri.go:89] found id: ""
	I1010 19:26:30.473396  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.473408  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:30.473417  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:30.473470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:30.510560  148123 cri.go:89] found id: ""
	I1010 19:26:30.510599  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.510610  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:30.510619  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:30.510687  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:30.549682  148123 cri.go:89] found id: ""
	I1010 19:26:30.549715  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.549726  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:30.549734  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:30.549798  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:30.586401  148123 cri.go:89] found id: ""
	I1010 19:26:30.586425  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.586434  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:30.586441  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:30.586492  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:30.622012  148123 cri.go:89] found id: ""
	I1010 19:26:30.622037  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.622045  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:30.622052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:30.622107  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:30.658336  148123 cri.go:89] found id: ""
	I1010 19:26:30.658364  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.658372  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:30.658378  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:30.658442  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:30.695495  148123 cri.go:89] found id: ""
	I1010 19:26:30.695523  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.695532  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:30.695538  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:30.695591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:30.735093  148123 cri.go:89] found id: ""
	I1010 19:26:30.735125  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.735136  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:30.735148  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:30.735163  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:30.816097  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:30.816140  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:30.853878  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:30.853915  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:30.906289  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:30.906341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:30.923742  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:30.923769  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:30.993226  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.493799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:33.508083  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:33.508166  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:33.545019  148123 cri.go:89] found id: ""
	I1010 19:26:33.545055  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.545072  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:33.545081  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:33.545150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:29.226673  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:31.227117  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.727777  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.068231  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.566025  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.050118  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:34.051720  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:36.550138  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.584215  148123 cri.go:89] found id: ""
	I1010 19:26:33.584242  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.584252  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:33.584259  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:33.584315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:33.620598  148123 cri.go:89] found id: ""
	I1010 19:26:33.620627  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.620636  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:33.620643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:33.620691  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:33.655225  148123 cri.go:89] found id: ""
	I1010 19:26:33.655252  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.655260  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:33.655267  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:33.655322  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:33.693464  148123 cri.go:89] found id: ""
	I1010 19:26:33.693490  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.693498  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:33.693505  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:33.693554  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:33.734024  148123 cri.go:89] found id: ""
	I1010 19:26:33.734059  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.734071  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:33.734079  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:33.734141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:33.769434  148123 cri.go:89] found id: ""
	I1010 19:26:33.769467  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.769476  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:33.769483  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:33.769533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:33.809011  148123 cri.go:89] found id: ""
	I1010 19:26:33.809040  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.809049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:33.809059  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:33.809071  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:33.864186  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:33.864230  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:33.878681  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:33.878711  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:33.958225  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.958250  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:33.958265  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:34.034730  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:34.034774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.577123  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:36.591494  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:36.591560  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:36.625170  148123 cri.go:89] found id: ""
	I1010 19:26:36.625210  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.625222  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:36.625230  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:36.625295  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:36.660790  148123 cri.go:89] found id: ""
	I1010 19:26:36.660821  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.660831  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:36.660838  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:36.660911  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:36.695742  148123 cri.go:89] found id: ""
	I1010 19:26:36.695774  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.695786  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:36.695793  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:36.695854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:36.729523  148123 cri.go:89] found id: ""
	I1010 19:26:36.729552  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.729561  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:36.729569  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:36.729630  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:36.765515  148123 cri.go:89] found id: ""
	I1010 19:26:36.765549  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.765561  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:36.765571  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:36.765635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:36.799110  148123 cri.go:89] found id: ""
	I1010 19:26:36.799144  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.799155  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:36.799164  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:36.799224  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:36.832806  148123 cri.go:89] found id: ""
	I1010 19:26:36.832834  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.832845  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:36.832865  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:36.832933  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:36.869093  148123 cri.go:89] found id: ""
	I1010 19:26:36.869126  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.869137  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:36.869149  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:36.869165  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:36.922229  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:36.922276  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:36.936973  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:36.937019  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:37.016400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:37.016425  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:37.016448  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:37.106308  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:37.106354  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.227451  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.726229  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:35.067396  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:37.565711  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.550438  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:41.050698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:39.652101  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:39.665546  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:39.665639  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:39.699646  148123 cri.go:89] found id: ""
	I1010 19:26:39.699675  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.699686  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:39.699693  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:39.699745  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:39.735568  148123 cri.go:89] found id: ""
	I1010 19:26:39.735592  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.735600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:39.735606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:39.735656  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:39.770021  148123 cri.go:89] found id: ""
	I1010 19:26:39.770049  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.770060  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:39.770067  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:39.770139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:39.804867  148123 cri.go:89] found id: ""
	I1010 19:26:39.804894  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.804903  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:39.804910  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:39.804972  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:39.847074  148123 cri.go:89] found id: ""
	I1010 19:26:39.847104  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.847115  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:39.847124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:39.847200  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:39.890334  148123 cri.go:89] found id: ""
	I1010 19:26:39.890368  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.890379  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:39.890387  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:39.890456  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:39.926316  148123 cri.go:89] found id: ""
	I1010 19:26:39.926346  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.926355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:39.926362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:39.926416  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:39.966179  148123 cri.go:89] found id: ""
	I1010 19:26:39.966215  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.966227  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:39.966239  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:39.966255  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:40.047457  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:40.047505  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.086611  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:40.086656  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:40.139123  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:40.139167  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:40.152966  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:40.152995  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:40.231009  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:42.731851  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:42.748201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:42.748287  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:42.787567  148123 cri.go:89] found id: ""
	I1010 19:26:42.787599  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.787607  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:42.787614  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:42.787697  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:42.831051  148123 cri.go:89] found id: ""
	I1010 19:26:42.831089  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.831100  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:42.831109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:42.831180  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:42.871352  148123 cri.go:89] found id: ""
	I1010 19:26:42.871390  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.871402  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:42.871410  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:42.871484  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:42.910283  148123 cri.go:89] found id: ""
	I1010 19:26:42.910316  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.910327  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:42.910335  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:42.910403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:42.946971  148123 cri.go:89] found id: ""
	I1010 19:26:42.947003  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.947012  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:42.947019  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:42.947087  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:42.985484  148123 cri.go:89] found id: ""
	I1010 19:26:42.985517  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.985528  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:42.985537  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:42.985603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:43.024500  148123 cri.go:89] found id: ""
	I1010 19:26:43.024535  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.024544  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:43.024552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:43.024608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:43.064008  148123 cri.go:89] found id: ""
	I1010 19:26:43.064039  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.064049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:43.064060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:43.064075  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:43.119367  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:43.119415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:43.133396  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:43.133428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:43.207447  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:43.207470  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:43.207491  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:43.290416  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:43.290456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.727919  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.227782  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:40.066461  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:42.565505  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.051835  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.052308  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.834115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:45.849172  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:45.849238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:45.886020  148123 cri.go:89] found id: ""
	I1010 19:26:45.886049  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.886060  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:45.886067  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:45.886152  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:45.923188  148123 cri.go:89] found id: ""
	I1010 19:26:45.923218  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.923245  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:45.923253  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:45.923316  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:45.960297  148123 cri.go:89] found id: ""
	I1010 19:26:45.960337  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.960351  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:45.960361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:45.960432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:45.995140  148123 cri.go:89] found id: ""
	I1010 19:26:45.995173  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.995184  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:45.995191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:45.995265  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:46.040390  148123 cri.go:89] found id: ""
	I1010 19:26:46.040418  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.040426  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:46.040433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:46.040500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:46.079999  148123 cri.go:89] found id: ""
	I1010 19:26:46.080029  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.080042  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:46.080052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:46.080105  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:46.117331  148123 cri.go:89] found id: ""
	I1010 19:26:46.117357  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.117364  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:46.117370  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:46.117441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:46.152248  148123 cri.go:89] found id: ""
	I1010 19:26:46.152279  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.152290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:46.152303  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:46.152319  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:46.192503  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:46.192533  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:46.246419  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:46.246456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:46.259864  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:46.259895  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:46.338518  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:46.338549  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:46.338567  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:45.726776  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.228318  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:44.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.065636  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.551013  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:50.053824  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.924216  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:48.938741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:48.938805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:48.973310  148123 cri.go:89] found id: ""
	I1010 19:26:48.973339  148123 logs.go:282] 0 containers: []
	W1010 19:26:48.973348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:48.973356  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:48.973411  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:49.010467  148123 cri.go:89] found id: ""
	I1010 19:26:49.010492  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.010500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:49.010506  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:49.010585  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:49.051621  148123 cri.go:89] found id: ""
	I1010 19:26:49.051646  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.051653  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:49.051664  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:49.051727  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:49.091090  148123 cri.go:89] found id: ""
	I1010 19:26:49.091121  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.091132  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:49.091140  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:49.091202  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:49.127669  148123 cri.go:89] found id: ""
	I1010 19:26:49.127712  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.127724  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:49.127732  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:49.127804  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:49.164446  148123 cri.go:89] found id: ""
	I1010 19:26:49.164476  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.164485  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:49.164491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:49.164553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:49.201222  148123 cri.go:89] found id: ""
	I1010 19:26:49.201263  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.201275  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:49.201284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:49.201345  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:49.238258  148123 cri.go:89] found id: ""
	I1010 19:26:49.238283  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.238290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:49.238299  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:49.238311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:49.313576  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:49.313619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:49.351066  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:49.351096  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:49.402772  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:49.402823  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:49.417713  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:49.417752  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:49.488834  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
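	The cycle above, and the near-identical repeats that follow, is minikube's log-gathering loop: it probes for each control-plane container with crictl, pulls the kubelet, CRI-O and dmesg logs, and retries `kubectl describe nodes`, which keeps failing because nothing answers on localhost:8443. The v1.20.0 binaries path suggests this is the old-k8s-version profile. A minimal sketch of the same checks run by hand over `minikube ssh` follows; the profile name is a placeholder, since this part of the log does not spell it out.

	    # Hedged reproduction sketch; these mirror the commands the log shows being run.
	    PROFILE=old-k8s-version-XXXXXX   # placeholder, not taken from this log
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      # same probe as cri.go: list containers in any state matching the component name
	      minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name="$name"
	    done
	    minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400
	    minikube -p "$PROFILE" ssh -- sudo journalctl -u crio -n 400
	    # the step that fails on every iteration while the apiserver is down
	    minikube -p "$PROFILE" ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig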
	I1010 19:26:51.989031  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:52.003056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:52.003140  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:52.039682  148123 cri.go:89] found id: ""
	I1010 19:26:52.039709  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.039718  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:52.039725  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:52.039788  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:52.081402  148123 cri.go:89] found id: ""
	I1010 19:26:52.081433  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.081443  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:52.081449  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:52.081502  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:52.118270  148123 cri.go:89] found id: ""
	I1010 19:26:52.118304  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.118315  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:52.118325  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:52.118392  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:52.154692  148123 cri.go:89] found id: ""
	I1010 19:26:52.154724  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.154735  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:52.154743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:52.154807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:52.190056  148123 cri.go:89] found id: ""
	I1010 19:26:52.190085  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.190094  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:52.190101  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:52.190161  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:52.225468  148123 cri.go:89] found id: ""
	I1010 19:26:52.225501  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.225511  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:52.225521  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:52.225589  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:52.260682  148123 cri.go:89] found id: ""
	I1010 19:26:52.260710  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.260718  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:52.260724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:52.260774  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:52.297613  148123 cri.go:89] found id: ""
	I1010 19:26:52.297642  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.297659  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:52.297672  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:52.297689  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:52.352224  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:52.352267  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:52.367003  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:52.367033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:52.443124  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:52.443157  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:52.443173  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:52.522391  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:52.522433  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:50.726363  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.727069  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:49.069109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:51.566132  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:53.567867  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.554195  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.050995  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.066877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:55.082191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:55.082258  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:55.120729  148123 cri.go:89] found id: ""
	I1010 19:26:55.120763  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.120775  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:55.120784  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:55.120863  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:55.154795  148123 cri.go:89] found id: ""
	I1010 19:26:55.154827  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.154839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:55.154848  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:55.154904  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:55.194846  148123 cri.go:89] found id: ""
	I1010 19:26:55.194876  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.194889  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:55.194897  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:55.194958  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:55.230173  148123 cri.go:89] found id: ""
	I1010 19:26:55.230201  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.230212  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:55.230220  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:55.230290  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:55.264999  148123 cri.go:89] found id: ""
	I1010 19:26:55.265025  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.265032  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:55.265039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:55.265096  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:55.303666  148123 cri.go:89] found id: ""
	I1010 19:26:55.303695  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.303703  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:55.303724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:55.303797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:55.342061  148123 cri.go:89] found id: ""
	I1010 19:26:55.342087  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.342098  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:55.342106  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:55.342171  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:55.378305  148123 cri.go:89] found id: ""
	I1010 19:26:55.378336  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.378345  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:55.378354  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:55.378365  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.416815  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:55.416865  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:55.467371  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:55.467415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:55.481704  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:55.481744  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:55.559219  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:55.559242  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:55.559254  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.141472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:58.154326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:58.154394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:58.193053  148123 cri.go:89] found id: ""
	I1010 19:26:58.193080  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.193088  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:58.193094  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:58.193142  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:58.232514  148123 cri.go:89] found id: ""
	I1010 19:26:58.232538  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.232545  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:58.232551  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:58.232599  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:58.266724  148123 cri.go:89] found id: ""
	I1010 19:26:58.266756  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.266765  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:58.266771  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:58.266824  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:58.301986  148123 cri.go:89] found id: ""
	I1010 19:26:58.302012  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.302019  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:58.302031  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:58.302080  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:58.340394  148123 cri.go:89] found id: ""
	I1010 19:26:58.340431  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.340440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:58.340448  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:58.340500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:58.407035  148123 cri.go:89] found id: ""
	I1010 19:26:58.407069  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.407081  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:58.407088  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:58.407151  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:58.446650  148123 cri.go:89] found id: ""
	I1010 19:26:58.446682  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.446694  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:58.446703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:58.446763  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:58.480719  148123 cri.go:89] found id: ""
	I1010 19:26:58.480750  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.480759  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:58.480768  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:58.480781  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.561568  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:58.561602  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.227199  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.726841  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:56.065787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.566732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.550718  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:59.550793  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.609237  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:58.609264  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:58.659705  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:58.659748  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:58.674646  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:58.674680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:58.750922  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
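	Every `describe nodes` attempt in this stretch fails identically: connection refused on localhost:8443, meaning no kube-apiserver is listening inside the guest. A quick way to confirm that directly, bypassing kubectl, is to probe the port and the standard apiserver health endpoint from inside the VM. This is a sketch; it assumes curl and ss are available in the guest image, which this log does not show.

	    # is anything listening on the apiserver port?
	    minikube -p "$PROFILE" ssh -- "sudo ss -ltn | grep 8443 || echo 'nothing listening on 8443'"
	    # standard healthz endpoint; -k because the serving cert is self-signed
	    minikube -p "$PROFILE" ssh -- "curl -sk https://localhost:8443/healthz || echo 'apiserver unreachable'"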
	I1010 19:27:01.252098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:01.265420  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:01.265497  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:01.300133  148123 cri.go:89] found id: ""
	I1010 19:27:01.300168  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.300180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:01.300188  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:01.300272  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:01.337451  148123 cri.go:89] found id: ""
	I1010 19:27:01.337477  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.337485  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:01.337492  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:01.337539  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:01.371987  148123 cri.go:89] found id: ""
	I1010 19:27:01.372019  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.372028  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:01.372034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:01.372098  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:01.408996  148123 cri.go:89] found id: ""
	I1010 19:27:01.409022  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.409033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:01.409042  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:01.409109  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:01.443558  148123 cri.go:89] found id: ""
	I1010 19:27:01.443587  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.443595  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:01.443602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:01.443663  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:01.478517  148123 cri.go:89] found id: ""
	I1010 19:27:01.478546  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.478555  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:01.478561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:01.478611  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:01.513528  148123 cri.go:89] found id: ""
	I1010 19:27:01.513556  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.513568  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:01.513576  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:01.513641  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:01.558861  148123 cri.go:89] found id: ""
	I1010 19:27:01.558897  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.558909  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:01.558921  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:01.558937  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:01.599719  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:01.599747  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:01.650736  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:01.650774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:01.664643  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:01.664669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:01.736533  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.736557  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:01.736570  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:00.225540  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.226962  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:00.567193  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:03.066587  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.050439  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.050984  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:06.550977  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.317302  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:04.330450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:04.330519  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:04.368893  148123 cri.go:89] found id: ""
	I1010 19:27:04.368923  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.368932  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:04.368939  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:04.368993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:04.406598  148123 cri.go:89] found id: ""
	I1010 19:27:04.406625  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.406634  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:04.406640  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:04.406692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:04.441485  148123 cri.go:89] found id: ""
	I1010 19:27:04.441515  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.441525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:04.441532  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:04.441581  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:04.476663  148123 cri.go:89] found id: ""
	I1010 19:27:04.476690  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.476698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:04.476704  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:04.476765  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:04.512124  148123 cri.go:89] found id: ""
	I1010 19:27:04.512171  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.512186  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:04.512195  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:04.512260  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:04.547902  148123 cri.go:89] found id: ""
	I1010 19:27:04.547929  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.547940  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:04.547949  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:04.548008  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:04.585257  148123 cri.go:89] found id: ""
	I1010 19:27:04.585287  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.585297  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:04.585303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:04.585352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:04.621019  148123 cri.go:89] found id: ""
	I1010 19:27:04.621048  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.621057  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:04.621068  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:04.621080  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:04.662770  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:04.662801  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:04.714551  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:04.714592  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:04.730922  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:04.730951  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:04.841139  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.841163  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:04.841178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.428528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:07.442423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:07.442498  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:07.476722  148123 cri.go:89] found id: ""
	I1010 19:27:07.476753  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.476764  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:07.476772  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:07.476835  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:07.512211  148123 cri.go:89] found id: ""
	I1010 19:27:07.512245  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.512256  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:07.512263  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:07.512325  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:07.546546  148123 cri.go:89] found id: ""
	I1010 19:27:07.546588  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.546601  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:07.546619  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:07.546688  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:07.585743  148123 cri.go:89] found id: ""
	I1010 19:27:07.585768  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.585777  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:07.585783  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:07.585848  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:07.626827  148123 cri.go:89] found id: ""
	I1010 19:27:07.626855  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.626865  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:07.626874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:07.626960  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:07.661910  148123 cri.go:89] found id: ""
	I1010 19:27:07.661940  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.661948  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:07.661955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:07.662072  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:07.698255  148123 cri.go:89] found id: ""
	I1010 19:27:07.698288  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.698301  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:07.698309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:07.698374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:07.734389  148123 cri.go:89] found id: ""
	I1010 19:27:07.734417  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.734428  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:07.734440  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:07.734454  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.814089  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:07.814130  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:07.854799  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:07.854831  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:07.906595  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:07.906635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:07.921928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:07.921966  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:07.989839  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.727522  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.226694  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:05.565868  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.567139  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:09.050772  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:11.051291  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
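	Interleaved with that retry loop, three other runs (PIDs 147213, 148525 and 147758) are polling metrics-server pods that never report Ready. The manual equivalent of what pod_ready.go is checking, using one pod name exactly as it appears in the log and a placeholder kubectl context, is:

	    # read the same Ready condition the test polls
	    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-kw529 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	    # the reason it stays NotReady usually shows up in the pod's events
	    kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-kw529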
	I1010 19:27:10.490115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:10.504216  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:10.504292  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:10.554789  148123 cri.go:89] found id: ""
	I1010 19:27:10.554815  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.554825  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:10.554831  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:10.554889  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:10.618970  148123 cri.go:89] found id: ""
	I1010 19:27:10.618997  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.619005  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:10.619012  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:10.619061  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:10.666900  148123 cri.go:89] found id: ""
	I1010 19:27:10.666933  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.666946  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:10.666953  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:10.667023  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:10.704364  148123 cri.go:89] found id: ""
	I1010 19:27:10.704405  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.704430  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:10.704450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:10.704525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:10.742341  148123 cri.go:89] found id: ""
	I1010 19:27:10.742380  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.742389  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:10.742396  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:10.742461  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:10.783056  148123 cri.go:89] found id: ""
	I1010 19:27:10.783088  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.783098  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:10.783105  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:10.783174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:10.823198  148123 cri.go:89] found id: ""
	I1010 19:27:10.823224  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.823233  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:10.823243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:10.823302  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:10.858154  148123 cri.go:89] found id: ""
	I1010 19:27:10.858195  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.858204  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:10.858213  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:10.858229  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:10.935491  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:10.935518  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:10.935532  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:11.015653  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:11.015694  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:11.058765  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:11.058793  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:11.116748  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:11.116799  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:09.727270  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.225797  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.065372  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.065695  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.550669  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.051044  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.631579  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:13.644885  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:13.644953  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:13.685242  148123 cri.go:89] found id: ""
	I1010 19:27:13.685273  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.685285  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:13.685294  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:13.685364  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:13.720831  148123 cri.go:89] found id: ""
	I1010 19:27:13.720866  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.720877  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:13.720885  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:13.720947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:13.766374  148123 cri.go:89] found id: ""
	I1010 19:27:13.766404  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.766413  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:13.766419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:13.766468  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:13.804550  148123 cri.go:89] found id: ""
	I1010 19:27:13.804585  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.804597  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:13.804606  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:13.804674  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:13.842406  148123 cri.go:89] found id: ""
	I1010 19:27:13.842432  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.842440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:13.842460  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:13.842678  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:13.882365  148123 cri.go:89] found id: ""
	I1010 19:27:13.882402  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.882415  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:13.882423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:13.882505  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:13.926016  148123 cri.go:89] found id: ""
	I1010 19:27:13.926052  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.926065  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:13.926075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:13.926177  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:13.960485  148123 cri.go:89] found id: ""
	I1010 19:27:13.960523  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.960533  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:13.960542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:13.960558  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:14.013013  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:14.013055  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:14.026906  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:14.026935  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:14.104469  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:14.104494  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:14.104507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:14.185917  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:14.185959  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:16.732088  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:16.746646  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:16.746710  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:16.784715  148123 cri.go:89] found id: ""
	I1010 19:27:16.784739  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.784749  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:16.784755  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:16.784807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:16.824181  148123 cri.go:89] found id: ""
	I1010 19:27:16.824210  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.824220  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:16.824228  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:16.824289  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:16.860901  148123 cri.go:89] found id: ""
	I1010 19:27:16.860932  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.860941  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:16.860947  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:16.860997  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:16.895127  148123 cri.go:89] found id: ""
	I1010 19:27:16.895159  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.895175  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:16.895183  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:16.895266  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:16.931989  148123 cri.go:89] found id: ""
	I1010 19:27:16.932020  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.932028  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:16.932035  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:16.932086  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:16.967254  148123 cri.go:89] found id: ""
	I1010 19:27:16.967283  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.967292  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:16.967299  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:16.967347  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:17.003010  148123 cri.go:89] found id: ""
	I1010 19:27:17.003035  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.003043  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:17.003050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:17.003097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:17.040053  148123 cri.go:89] found id: ""
	I1010 19:27:17.040093  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.040102  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:17.040118  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:17.040131  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:17.093176  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:17.093217  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:17.108086  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:17.108116  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:17.186423  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:17.186452  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:17.186467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:17.271884  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:17.271926  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:14.227197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.739354  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:14.066233  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.565852  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.566337  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.051613  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:20.549888  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:19.817954  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:19.831924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:19.831993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:19.873415  148123 cri.go:89] found id: ""
	I1010 19:27:19.873441  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.873459  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:19.873466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:19.873527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:19.912174  148123 cri.go:89] found id: ""
	I1010 19:27:19.912207  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.912219  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:19.912227  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:19.912298  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:19.948422  148123 cri.go:89] found id: ""
	I1010 19:27:19.948456  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.948466  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:19.948472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:19.948524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:19.983917  148123 cri.go:89] found id: ""
	I1010 19:27:19.983949  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.983962  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:19.983970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:19.984024  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:20.019160  148123 cri.go:89] found id: ""
	I1010 19:27:20.019187  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.019198  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:20.019207  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:20.019271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:20.056073  148123 cri.go:89] found id: ""
	I1010 19:27:20.056104  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.056116  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:20.056124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:20.056187  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:20.095877  148123 cri.go:89] found id: ""
	I1010 19:27:20.095916  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.095928  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:20.095935  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:20.096007  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:20.132360  148123 cri.go:89] found id: ""
	I1010 19:27:20.132385  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.132394  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:20.132402  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:20.132413  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:20.190573  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:20.190619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:20.205785  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:20.205819  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:20.278882  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:20.278911  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:20.278924  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:20.363982  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:20.364024  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:22.906059  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:22.919839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:22.919912  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:22.958203  148123 cri.go:89] found id: ""
	I1010 19:27:22.958242  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.958252  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:22.958258  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:22.958312  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:22.995834  148123 cri.go:89] found id: ""
	I1010 19:27:22.995866  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.995874  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:22.995880  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:22.995945  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:23.035913  148123 cri.go:89] found id: ""
	I1010 19:27:23.035950  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.035962  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:23.035970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:23.036039  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:23.073995  148123 cri.go:89] found id: ""
	I1010 19:27:23.074036  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.074049  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:23.074057  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:23.074117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:23.111190  148123 cri.go:89] found id: ""
	I1010 19:27:23.111222  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.111234  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:23.111243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:23.111305  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:23.147771  148123 cri.go:89] found id: ""
	I1010 19:27:23.147797  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.147806  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:23.147814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:23.147878  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:23.185485  148123 cri.go:89] found id: ""
	I1010 19:27:23.185517  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.185527  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:23.185535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:23.185598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:23.222030  148123 cri.go:89] found id: ""
	I1010 19:27:23.222060  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.222070  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:23.222081  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:23.222097  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:23.301826  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:23.301849  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:23.301861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:23.378688  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:23.378730  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:23.426683  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:23.426717  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:23.478425  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:23.478467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:19.226994  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.727366  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.067094  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:23.567075  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:22.550076  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:24.551681  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:25.993768  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:26.008311  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:26.008383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:26.046315  148123 cri.go:89] found id: ""
	I1010 19:27:26.046343  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.046355  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:26.046364  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:26.046418  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:26.081823  148123 cri.go:89] found id: ""
	I1010 19:27:26.081847  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.081855  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:26.081861  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:26.081913  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:26.117139  148123 cri.go:89] found id: ""
	I1010 19:27:26.117167  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.117183  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:26.117191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:26.117242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:26.154477  148123 cri.go:89] found id: ""
	I1010 19:27:26.154501  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.154510  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:26.154517  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:26.154568  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:26.187497  148123 cri.go:89] found id: ""
	I1010 19:27:26.187527  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.187535  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:26.187541  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:26.187604  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:26.223110  148123 cri.go:89] found id: ""
	I1010 19:27:26.223140  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.223151  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:26.223160  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:26.223226  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:26.264975  148123 cri.go:89] found id: ""
	I1010 19:27:26.265003  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.265011  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:26.265017  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:26.265067  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:26.307467  148123 cri.go:89] found id: ""
	I1010 19:27:26.307494  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.307503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:26.307512  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:26.307524  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:26.346313  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:26.346342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:26.400069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:26.400109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:26.415467  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:26.415500  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:26.486986  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:26.487016  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:26.487033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:24.226736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.228720  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.726470  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.067100  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.565675  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:27.051110  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.051207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.553085  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.064522  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:29.077631  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:29.077696  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:29.116127  148123 cri.go:89] found id: ""
	I1010 19:27:29.116154  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.116164  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:29.116172  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:29.116241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:29.152863  148123 cri.go:89] found id: ""
	I1010 19:27:29.152893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.152901  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:29.152907  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:29.152963  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:29.189951  148123 cri.go:89] found id: ""
	I1010 19:27:29.189983  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.189992  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:29.189999  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:29.190060  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:29.226611  148123 cri.go:89] found id: ""
	I1010 19:27:29.226646  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.226657  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:29.226665  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:29.226729  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:29.271884  148123 cri.go:89] found id: ""
	I1010 19:27:29.271921  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.271934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:29.271944  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:29.272016  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:29.306128  148123 cri.go:89] found id: ""
	I1010 19:27:29.306168  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.306181  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:29.306191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:29.306255  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:29.341869  148123 cri.go:89] found id: ""
	I1010 19:27:29.341893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.341901  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:29.341908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:29.341962  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:29.376213  148123 cri.go:89] found id: ""
	I1010 19:27:29.376240  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.376249  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:29.376259  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:29.376273  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:29.428827  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:29.428878  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:29.443719  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:29.443754  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:29.516166  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:29.516193  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:29.516208  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:29.596643  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:29.596681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:32.135286  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:32.148791  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:32.148893  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:32.185511  148123 cri.go:89] found id: ""
	I1010 19:27:32.185542  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.185553  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:32.185560  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:32.185626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:32.222908  148123 cri.go:89] found id: ""
	I1010 19:27:32.222934  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.222942  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:32.222948  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:32.222999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:32.260998  148123 cri.go:89] found id: ""
	I1010 19:27:32.261033  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.261045  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:32.261063  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:32.261128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:32.298311  148123 cri.go:89] found id: ""
	I1010 19:27:32.298339  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.298348  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:32.298355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:32.298409  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:32.334134  148123 cri.go:89] found id: ""
	I1010 19:27:32.334220  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.334236  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:32.334249  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:32.334319  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:32.369690  148123 cri.go:89] found id: ""
	I1010 19:27:32.369723  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.369735  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:32.369743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:32.369807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:32.407164  148123 cri.go:89] found id: ""
	I1010 19:27:32.407210  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.407218  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:32.407224  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:32.407279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:32.446468  148123 cri.go:89] found id: ""
	I1010 19:27:32.446494  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.446505  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:32.446519  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:32.446535  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:32.497953  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:32.497997  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:32.513590  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:32.513620  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:32.594037  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:32.594072  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:32.594090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:32.673546  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:32.673587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:30.727725  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:32.727813  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.066731  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:33.067815  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:34.050574  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:36.550119  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.226084  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:35.241152  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:35.241222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:35.283208  148123 cri.go:89] found id: ""
	I1010 19:27:35.283245  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.283269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:35.283286  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:35.283346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:35.320406  148123 cri.go:89] found id: ""
	I1010 19:27:35.320444  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.320457  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:35.320466  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:35.320523  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:35.360984  148123 cri.go:89] found id: ""
	I1010 19:27:35.361015  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.361027  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:35.361034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:35.361097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:35.400191  148123 cri.go:89] found id: ""
	I1010 19:27:35.400219  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.400230  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:35.400244  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:35.400314  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:35.438092  148123 cri.go:89] found id: ""
	I1010 19:27:35.438126  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.438139  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:35.438147  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:35.438215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:35.472656  148123 cri.go:89] found id: ""
	I1010 19:27:35.472681  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.472688  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:35.472694  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:35.472743  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.505895  148123 cri.go:89] found id: ""
	I1010 19:27:35.505931  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.505942  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:35.505950  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:35.506015  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:35.540406  148123 cri.go:89] found id: ""
	I1010 19:27:35.540441  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.540451  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:35.540462  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:35.540478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:35.555874  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:35.555903  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:35.628400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:35.628427  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:35.628453  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:35.708262  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:35.708305  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:35.752796  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:35.752824  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.306212  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:38.319473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:38.319543  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:38.357493  148123 cri.go:89] found id: ""
	I1010 19:27:38.357520  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.357529  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:38.357536  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:38.357588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:38.395908  148123 cri.go:89] found id: ""
	I1010 19:27:38.395935  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.395943  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:38.395949  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:38.396010  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:38.434495  148123 cri.go:89] found id: ""
	I1010 19:27:38.434529  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.434539  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:38.434545  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:38.434608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:38.474647  148123 cri.go:89] found id: ""
	I1010 19:27:38.474686  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.474698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:38.474710  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:38.474784  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:38.511105  148123 cri.go:89] found id: ""
	I1010 19:27:38.511133  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.511141  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:38.511148  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:38.511199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:38.551011  148123 cri.go:89] found id: ""
	I1010 19:27:38.551055  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.551066  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:38.551074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:38.551139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.227301  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:37.726528  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.567838  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.066658  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.552499  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.544561  147758 pod_ready.go:82] duration metric: took 4m0.00091784s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	E1010 19:27:40.544600  147758 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:27:40.544623  147758 pod_ready.go:39] duration metric: took 4m15.623470592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:27:40.544664  147758 kubeadm.go:597] duration metric: took 4m22.92080204s to restartPrimaryControlPlane
	W1010 19:27:40.544737  147758 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:40.544829  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:38.590579  148123 cri.go:89] found id: ""
	I1010 19:27:38.590606  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.590615  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:38.590621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:38.590686  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:38.627775  148123 cri.go:89] found id: ""
	I1010 19:27:38.627812  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.627826  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:38.627838  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:38.627854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.683195  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:38.683239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:38.697220  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:38.697250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:38.771619  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:38.771651  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:38.771668  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:38.851324  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:38.851362  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.408165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:41.422958  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:41.423032  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:41.466774  148123 cri.go:89] found id: ""
	I1010 19:27:41.466806  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.466816  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:41.466823  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:41.466874  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:41.506429  148123 cri.go:89] found id: ""
	I1010 19:27:41.506470  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.506482  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:41.506489  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:41.506555  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:41.548929  148123 cri.go:89] found id: ""
	I1010 19:27:41.548965  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.548976  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:41.548983  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:41.549044  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:41.592243  148123 cri.go:89] found id: ""
	I1010 19:27:41.592274  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.592283  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:41.592290  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:41.592342  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:41.628516  148123 cri.go:89] found id: ""
	I1010 19:27:41.628558  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.628570  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:41.628578  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:41.628650  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:41.665011  148123 cri.go:89] found id: ""
	I1010 19:27:41.665041  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.665053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:41.665060  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:41.665112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:41.702650  148123 cri.go:89] found id: ""
	I1010 19:27:41.702681  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.702692  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:41.702700  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:41.702764  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:41.738971  148123 cri.go:89] found id: ""
	I1010 19:27:41.739002  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.739021  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:41.739031  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:41.739062  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:41.813296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:41.813321  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:41.813335  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:41.895138  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:41.895185  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.941975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:41.942012  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:41.994838  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:41.994888  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:39.727140  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:41.728263  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.566241  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:43.065219  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:44.511153  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:44.524871  148123 kubeadm.go:597] duration metric: took 4m4.194337013s to restartPrimaryControlPlane
	W1010 19:27:44.524953  148123 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:44.524988  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:44.994063  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:27:45.010926  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:27:45.021757  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:27:45.032246  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:27:45.032270  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:27:45.032326  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:27:45.042799  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:27:45.042866  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:27:45.053017  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:27:45.063373  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:27:45.063445  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:27:45.074689  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.085187  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:27:45.085241  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.096275  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:27:45.106686  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:27:45.106750  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:27:45.117018  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:27:45.358920  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:27:44.226853  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:46.227586  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:48.727469  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:45.066410  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:47.569864  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:51.230704  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:53.727351  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:50.065845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:52.066267  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:55.727457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:58.226861  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:54.564611  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:56.566702  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:00.728542  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.225779  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:59.065614  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:01.068088  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.566502  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.739904  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.195045639s)
	I1010 19:28:06.739984  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:06.756046  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:06.768580  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:06.780663  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:06.780732  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:06.780807  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:28:06.792092  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:06.792179  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:06.804515  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:28:06.814969  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:06.815040  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:06.826056  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.836050  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:06.836108  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.846125  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:28:06.855505  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:06.855559  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:06.865367  147758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:06.916227  147758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:06.916375  147758 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:07.036539  147758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:07.036652  147758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:07.036762  147758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:07.044897  147758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:07.046978  147758 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:07.047117  147758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:07.047229  147758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:07.047384  147758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:07.047467  147758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:07.047584  147758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:07.047675  147758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:07.047794  147758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:07.047902  147758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:07.048005  147758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:07.048093  147758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:07.048142  147758 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:07.048210  147758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:07.127836  147758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:07.434492  147758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:07.487567  147758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:07.731314  147758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:07.919060  147758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:07.919565  147758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:07.922740  147758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:05.227611  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.229836  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.065246  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:08.067360  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.925140  147758 out.go:235]   - Booting up control plane ...
	I1010 19:28:07.925239  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:07.925356  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:07.925444  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:07.944375  147758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:07.951182  147758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:07.951274  147758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:08.087325  147758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:08.087560  147758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:08.598361  147758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.081439ms
	I1010 19:28:08.598502  147758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
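Editor's note: the [kubelet-check] phase above polls the kubelet's local health endpoint before the [api-check] phase takes over. To reproduce that probe by hand against this profile, something like the following should work (assuming curl is available in the guest and the default kubelet healthz port 10248; this is not a step the test itself runs):

    # Probe the kubelet health endpoint inside the minikube VM (illustrative).
    minikube ssh -p embed-certs-541370 "curl -sf http://127.0.0.1:10248/healthz"
    # A healthy kubelet responds with: ok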
	I1010 19:28:09.727932  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:12.227939  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:10.566945  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:13.067142  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.100517  147758 kubeadm.go:310] [api-check] The API server is healthy after 5.501985157s
	I1010 19:28:14.119932  147758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:14.149557  147758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:14.207413  147758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:14.207735  147758 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-541370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:14.226199  147758 kubeadm.go:310] [bootstrap-token] Using token: sbg4v0.t5me93bb5vn8m913
	I1010 19:28:14.228059  147758 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:14.228208  147758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:14.241706  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:14.256554  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:14.263129  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:14.274346  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:14.282313  147758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:14.507850  147758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:14.970234  147758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:15.508328  147758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:15.509530  147758 kubeadm.go:310] 
	I1010 19:28:15.509635  147758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:15.509653  147758 kubeadm.go:310] 
	I1010 19:28:15.509743  147758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:15.509762  147758 kubeadm.go:310] 
	I1010 19:28:15.509795  147758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:15.509888  147758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:15.509954  147758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:15.509970  147758 kubeadm.go:310] 
	I1010 19:28:15.510083  147758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:15.510103  147758 kubeadm.go:310] 
	I1010 19:28:15.510203  147758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:15.510214  147758 kubeadm.go:310] 
	I1010 19:28:15.510297  147758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:15.510410  147758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:15.510489  147758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:15.510495  147758 kubeadm.go:310] 
	I1010 19:28:15.510603  147758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:15.510707  147758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:15.510724  147758 kubeadm.go:310] 
	I1010 19:28:15.510807  147758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.510958  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:15.511005  147758 kubeadm.go:310] 	--control-plane 
	I1010 19:28:15.511034  147758 kubeadm.go:310] 
	I1010 19:28:15.511161  147758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:15.511173  147758 kubeadm.go:310] 
	I1010 19:28:15.511268  147758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.511403  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
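Editor's note: the join commands printed above embed the bootstrap token and CA cert hash generated for this run. Outside of this test, the usual way to obtain a fresh, equivalent join command is kubeadm itself; the commands below are standard kubeadm usage, not part of this test flow:

    # Print a current worker join command (creates a new bootstrap token).
    sudo kubeadm token create --print-join-command
    # For an additional control-plane node, upload the certs and note the certificate key,
    # then append --control-plane (and --certificate-key) to the join command.
    sudo kubeadm init phase upload-certs --upload-certs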
	I1010 19:28:15.512298  147758 kubeadm.go:310] W1010 19:28:06.890572    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512594  147758 kubeadm.go:310] W1010 19:28:06.891448    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512702  147758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:15.512734  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:28:15.512744  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:15.514703  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:15.516229  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:15.527554  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
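Editor's note: the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration; its exact contents are not reproduced in the log. As a rough stand-in, a minimal bridge conflist of the same shape looks like the sketch below. Every field value here is an assumption for illustration, not the file minikube ships, so it is written to a scratch path rather than /etc/cni/net.d.

    # Illustrative bridge CNI config (NOT the exact file from this run).
    cat >/tmp/example-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF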
	I1010 19:28:15.549266  147758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:15.549362  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:15.549399  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-541370 minikube.k8s.io/updated_at=2024_10_10T19_28_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=embed-certs-541370 minikube.k8s.io/primary=true
	I1010 19:28:15.590732  147758 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:15.740942  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.241392  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.741807  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:14.229241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:16.727260  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.059512  148525 pod_ready.go:82] duration metric: took 4m0.00022742s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:14.059550  148525 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:28:14.059569  148525 pod_ready.go:39] duration metric: took 4m7.001942194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:14.059614  148525 kubeadm.go:597] duration metric: took 4m14.998320151s to restartPrimaryControlPlane
	W1010 19:28:14.059672  148525 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:28:14.059698  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:28:17.241315  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:17.741580  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.241006  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.742042  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.241251  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.741030  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.862541  147758 kubeadm.go:1113] duration metric: took 4.313246481s to wait for elevateKubeSystemPrivileges
	I1010 19:28:19.862579  147758 kubeadm.go:394] duration metric: took 5m2.288571479s to StartCluster
	I1010 19:28:19.862628  147758 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.862751  147758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:19.864528  147758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.864812  147758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:19.864910  147758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:19.865019  147758 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-541370"
	I1010 19:28:19.865041  147758 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-541370"
	W1010 19:28:19.865053  147758 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:19.865062  147758 addons.go:69] Setting default-storageclass=true in profile "embed-certs-541370"
	I1010 19:28:19.865085  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865077  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:19.865129  147758 addons.go:69] Setting metrics-server=true in profile "embed-certs-541370"
	I1010 19:28:19.865164  147758 addons.go:234] Setting addon metrics-server=true in "embed-certs-541370"
	W1010 19:28:19.865179  147758 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:19.865115  147758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-541370"
	I1010 19:28:19.865215  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865558  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865593  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865607  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865629  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865595  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865725  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.866857  147758 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:19.868590  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:19.882524  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I1010 19:28:19.882595  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I1010 19:28:19.882678  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I1010 19:28:19.883065  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883168  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883281  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883559  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883575  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883657  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883669  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883802  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883818  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883968  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.883976  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884141  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884194  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.884408  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884437  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.884684  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884746  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.887912  147758 addons.go:234] Setting addon default-storageclass=true in "embed-certs-541370"
	W1010 19:28:19.887942  147758 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:19.887973  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.888333  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.888383  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.901588  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1010 19:28:19.902131  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.902597  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.902621  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.902927  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.903101  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.904556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.905207  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1010 19:28:19.905621  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.906188  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.906209  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.906599  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.906647  147758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:19.906837  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.907699  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1010 19:28:19.908147  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.908557  147758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:19.908584  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:19.908610  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.908705  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.908717  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.908745  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.909364  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.910154  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.910208  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.910840  147758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:19.912716  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.912722  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:19.912743  147758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:19.912769  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.913199  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.913224  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.913500  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.913682  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.913845  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.913972  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.921800  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922343  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.922374  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922653  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.922842  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.922965  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.923108  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.935097  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I1010 19:28:19.935605  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.936123  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.936146  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.936561  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.936747  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.938789  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.939019  147758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:19.939034  147758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:19.939054  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.941682  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942137  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.942165  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942404  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.942642  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.942767  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.942915  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:20.108247  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:20.149819  147758 node_ready.go:35] waiting up to 6m0s for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163096  147758 node_ready.go:49] node "embed-certs-541370" has status "Ready":"True"
	I1010 19:28:20.163118  147758 node_ready.go:38] duration metric: took 13.26779ms for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163128  147758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:20.168620  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:20.241952  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:20.241978  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:20.249679  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:20.290149  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:20.290190  147758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:20.291475  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:20.410539  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.410582  147758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:20.491567  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
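Editor's note: after the four metrics-server manifests are applied above, the addon can be checked with standard kubectl commands; the lines below are a generic verification sketch against this profile, not something the test runs. Note that this job deliberately points metrics-server at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line earlier), so in this run the pod intentionally never becomes Ready.

    # Verify the metrics-server addon rollout (illustrative; not part of the test).
    kubectl --context embed-certs-541370 -n kube-system rollout status deployment/metrics-server
    kubectl --context embed-certs-541370 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-541370 top nodes   # works once the APIService reports Available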
	I1010 19:28:20.684370  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684403  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.684695  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.684742  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.684749  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.684756  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684764  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.685029  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.685059  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.685036  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.695901  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.695926  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.696202  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.696249  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439463  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147952803s)
	I1010 19:28:21.439626  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.439659  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.439951  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.439969  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.439976  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439997  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.440009  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.440299  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.440298  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.440314  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.780486  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.288854773s)
	I1010 19:28:21.780551  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.780567  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.780948  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.780980  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.780996  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781007  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.781016  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.781289  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.781310  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781331  147758 addons.go:475] Verifying addon metrics-server=true in "embed-certs-541370"
	I1010 19:28:21.783512  147758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:21.784958  147758 addons.go:510] duration metric: took 1.92006141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:19.225844  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:21.227960  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:23.726439  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:22.195129  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:24.678736  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:25.727053  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.727657  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.177348  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:29.177459  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.177485  147758 pod_ready.go:82] duration metric: took 9.008841503s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.177495  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182744  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.182777  147758 pod_ready.go:82] duration metric: took 5.273263ms for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182791  147758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191507  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.191539  147758 pod_ready.go:82] duration metric: took 8.738961ms for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191554  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199167  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.199218  147758 pod_ready.go:82] duration metric: took 7.635672ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199234  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204558  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.204581  147758 pod_ready.go:82] duration metric: took 5.337574ms for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204591  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573781  147758 pod_ready.go:93] pod "kube-proxy-6hdds" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.573808  147758 pod_ready.go:82] duration metric: took 369.210969ms for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573818  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974015  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.974039  147758 pod_ready.go:82] duration metric: took 400.214845ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974048  147758 pod_ready.go:39] duration metric: took 9.810911064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:29.974066  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:29.974120  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:29.991332  147758 api_server.go:72] duration metric: took 10.126480862s to wait for apiserver process to appear ...
	I1010 19:28:29.991356  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:29.991382  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:28:29.995855  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:28:29.997488  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:28:29.997516  147758 api_server.go:131] duration metric: took 6.152312ms to wait for apiserver health ...
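Editor's note: the healthz probe logged just above targets https://192.168.39.120:8443/healthz. Reproducing it outside the test can be done anonymously (if RBAC allows unauthenticated access to /healthz, the default) or through kubectl; both forms are illustrative:

    # Reproduce the apiserver health check (illustrative).
    curl -k https://192.168.39.120:8443/healthz              # relies on anonymous access to /healthz
    kubectl --context embed-certs-541370 get --raw /healthz  # authenticated alternative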
	I1010 19:28:29.997526  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:28:30.176631  147758 system_pods.go:59] 9 kube-system pods found
	I1010 19:28:30.176662  147758 system_pods.go:61] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.176668  147758 system_pods.go:61] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.176672  147758 system_pods.go:61] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.176676  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.176680  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.176683  147758 system_pods.go:61] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.176686  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.176693  147758 system_pods.go:61] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.176699  147758 system_pods.go:61] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.176707  147758 system_pods.go:74] duration metric: took 179.174083ms to wait for pod list to return data ...
	I1010 19:28:30.176714  147758 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:28:30.375326  147758 default_sa.go:45] found service account: "default"
	I1010 19:28:30.375361  147758 default_sa.go:55] duration metric: took 198.640267ms for default service account to be created ...
	I1010 19:28:30.375374  147758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:28:30.578749  147758 system_pods.go:86] 9 kube-system pods found
	I1010 19:28:30.578780  147758 system_pods.go:89] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.578786  147758 system_pods.go:89] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.578790  147758 system_pods.go:89] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.578794  147758 system_pods.go:89] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.578797  147758 system_pods.go:89] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.578801  147758 system_pods.go:89] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.578804  147758 system_pods.go:89] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.578810  147758 system_pods.go:89] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.578814  147758 system_pods.go:89] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.578822  147758 system_pods.go:126] duration metric: took 203.441477ms to wait for k8s-apps to be running ...
	I1010 19:28:30.578829  147758 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:28:30.578877  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:30.596523  147758 system_svc.go:56] duration metric: took 17.684729ms WaitForService to wait for kubelet
	I1010 19:28:30.596553  147758 kubeadm.go:582] duration metric: took 10.731708748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:28:30.596573  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:28:30.774749  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:28:30.774783  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:28:30.774807  147758 node_conditions.go:105] duration metric: took 178.228671ms to run NodePressure ...
	I1010 19:28:30.774822  147758 start.go:241] waiting for startup goroutines ...
	I1010 19:28:30.774831  147758 start.go:246] waiting for cluster config update ...
	I1010 19:28:30.774845  147758 start.go:255] writing updated cluster config ...
	I1010 19:28:30.775121  147758 ssh_runner.go:195] Run: rm -f paused
	I1010 19:28:30.826689  147758 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:28:30.828795  147758 out.go:177] * Done! kubectl is now configured to use "embed-certs-541370" cluster and "default" namespace by default
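Editor's note: once the profile reports Done, the cluster is reachable through the freshly written kubeconfig at /home/jenkins/minikube-integration/19787-81676/kubeconfig. A quick post-start sanity check (not part of the test flow) would be:

    # Quick post-start sanity check (illustrative).
    kubectl config use-context embed-certs-541370
    kubectl get nodes -o wide
    kubectl -n kube-system get pods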
	I1010 19:28:29.728096  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:32.229632  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:34.726536  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:36.727032  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:38.727488  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:40.372903  148525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.31317648s)
	I1010 19:28:40.372991  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:40.389319  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:40.400123  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:40.411906  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:40.411932  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:40.411976  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:28:40.421840  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:40.421904  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:40.432229  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:28:40.442121  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:40.442203  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:40.452969  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.463085  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:40.463146  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.473103  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:28:40.482854  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:40.482914  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:40.494023  148525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:40.543369  148525 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:40.543466  148525 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:40.657301  148525 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:40.657462  148525 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:40.657579  148525 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:40.669222  148525 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:40.670995  148525 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:40.671102  148525 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:40.671171  148525 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:40.671284  148525 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:40.671374  148525 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:40.671471  148525 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:40.671557  148525 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:40.671650  148525 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:40.671751  148525 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:40.671895  148525 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:40.672000  148525 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:40.672056  148525 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:40.672136  148525 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:40.876613  148525 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:41.109518  148525 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:41.186751  148525 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:41.424710  148525 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:41.479611  148525 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:41.480235  148525 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:41.483222  148525 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:41.227521  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:43.728023  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:41.484809  148525 out.go:235]   - Booting up control plane ...
	I1010 19:28:41.484935  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:41.485020  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:41.485317  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:41.506919  148525 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:41.517006  148525 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:41.517077  148525 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:41.653211  148525 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:41.653364  148525 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:42.655360  148525 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001910447s
	I1010 19:28:42.655482  148525 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:47.658431  148525 kubeadm.go:310] [api-check] The API server is healthy after 5.003169217s
	I1010 19:28:47.676178  148525 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:47.694752  148525 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:47.720376  148525 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:47.720645  148525 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-361847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:47.736489  148525 kubeadm.go:310] [bootstrap-token] Using token: cprf0t.lm4xp75yi0cdu4sy
	I1010 19:28:46.228217  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:48.726740  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:47.737958  148525 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:47.738089  148525 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:47.750073  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:47.758010  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:47.761649  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:47.768953  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:47.774428  148525 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:48.065988  148525 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:48.502538  148525 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:49.066479  148525 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:49.069842  148525 kubeadm.go:310] 
	I1010 19:28:49.069937  148525 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:49.069947  148525 kubeadm.go:310] 
	I1010 19:28:49.070046  148525 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:49.070058  148525 kubeadm.go:310] 
	I1010 19:28:49.070089  148525 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:49.070166  148525 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:49.070254  148525 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:49.070265  148525 kubeadm.go:310] 
	I1010 19:28:49.070342  148525 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:49.070353  148525 kubeadm.go:310] 
	I1010 19:28:49.070446  148525 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:49.070478  148525 kubeadm.go:310] 
	I1010 19:28:49.070544  148525 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:49.070640  148525 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:49.070750  148525 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:49.070773  148525 kubeadm.go:310] 
	I1010 19:28:49.070880  148525 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:49.070990  148525 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:49.071001  148525 kubeadm.go:310] 
	I1010 19:28:49.071153  148525 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.071299  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:49.071330  148525 kubeadm.go:310] 	--control-plane 
	I1010 19:28:49.071349  148525 kubeadm.go:310] 
	I1010 19:28:49.071468  148525 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:49.071497  148525 kubeadm.go:310] 
	I1010 19:28:49.072228  148525 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.072354  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:49.074595  148525 kubeadm.go:310] W1010 19:28:40.525557    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.074944  148525 kubeadm.go:310] W1010 19:28:40.526329    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.075102  148525 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:49.075143  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:28:49.075166  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:49.077190  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:49.078665  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:49.091792  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
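For reference, a bridge conflist of the general shape being written above looks like the sketch below; the exact 496-byte file from this run is not reproduced in the log, and the subnet and plugin options shown here are assumptions rather than values taken from it.

# Illustrative sketch only: a bridge CNI config of the kind minikube drops into
# /etc/cni/net.d/1-k8s.conflist. Subnet and option values are assumed, not copied
# from this run.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF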
	I1010 19:28:49.113801  148525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:49.113920  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-361847 minikube.k8s.io/updated_at=2024_10_10T19_28_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=default-k8s-diff-port-361847 minikube.k8s.io/primary=true
	I1010 19:28:49.114074  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.154398  148525 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:49.351271  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.852049  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.351441  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.852022  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.351391  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.851329  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.351840  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.852392  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.351397  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.443325  148525 kubeadm.go:1113] duration metric: took 4.329288133s to wait for elevateKubeSystemPrivileges
	I1010 19:28:53.443363  148525 kubeadm.go:394] duration metric: took 4m54.439732071s to StartCluster
	I1010 19:28:53.443386  148525 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.443481  148525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:53.445465  148525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.445747  148525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:53.445842  148525 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:53.445957  148525 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.445980  148525 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.445992  148525 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:53.446004  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:53.446026  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446065  148525 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446100  148525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-361847"
	I1010 19:28:53.446085  148525 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446137  148525 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.446151  148525 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:53.446242  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446515  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.446562  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447089  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447135  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447315  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447360  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.450779  148525 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:53.452838  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:53.465502  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I1010 19:28:53.466020  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.466572  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.466594  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.466772  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1010 19:28:53.467034  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.467209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.467310  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.467828  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.467857  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.467899  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1010 19:28:53.468270  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.468451  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.468866  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.468891  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.469102  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.469150  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.469484  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.470068  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.470114  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.471192  148525 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.471213  148525 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:53.471261  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.471618  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.471664  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.486550  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1010 19:28:53.487068  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.487608  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.487626  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.488015  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.488329  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.490200  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I1010 19:28:53.490240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.490790  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.491318  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.491341  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.491682  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.491957  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1010 19:28:53.492100  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.492423  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.492731  148525 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:53.492811  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.492831  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.493240  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.493885  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.493979  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.494031  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.494359  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:53.494381  148525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:53.494397  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.495771  148525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:51.226596  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227299  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227335  147213 pod_ready.go:82] duration metric: took 4m0.007224391s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:53.227346  147213 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1010 19:28:53.227355  147213 pod_ready.go:39] duration metric: took 4m5.554224355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.227375  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:53.227419  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:53.227484  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:53.288713  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.288749  147213 cri.go:89] found id: ""
	I1010 19:28:53.288759  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:53.288823  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.294819  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:53.294904  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:53.340169  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:53.340197  147213 cri.go:89] found id: ""
	I1010 19:28:53.340207  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:53.340271  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.345214  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:53.345292  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:53.392808  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.392838  147213 cri.go:89] found id: ""
	I1010 19:28:53.392859  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:53.392921  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.398275  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:53.398361  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:53.439567  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.439594  147213 cri.go:89] found id: ""
	I1010 19:28:53.439604  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:53.439665  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.444366  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:53.444436  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:53.522580  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:53.522597  147213 cri.go:89] found id: ""
	I1010 19:28:53.522605  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:53.522654  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.528890  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:53.528974  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:53.575933  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:53.575963  147213 cri.go:89] found id: ""
	I1010 19:28:53.575975  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:53.576035  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.581693  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:53.581763  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:53.619789  147213 cri.go:89] found id: ""
	I1010 19:28:53.619819  147213 logs.go:282] 0 containers: []
	W1010 19:28:53.619831  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:53.619839  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:53.619899  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:53.659715  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:53.659746  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:53.659752  147213 cri.go:89] found id: ""
	I1010 19:28:53.659762  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:53.659828  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.664377  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.668766  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:53.668796  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:53.685976  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:53.686007  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:53.497232  148525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:53.497251  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:53.497273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.497732  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498599  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.498627  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498971  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.499159  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.499312  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.499414  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.501044  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.501531  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501782  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.501956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.502080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.502232  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.512240  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1010 19:28:53.512809  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.513347  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.513368  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.513787  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.514001  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.515436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.515639  148525 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.515659  148525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:53.515681  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.518128  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518596  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.518628  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518909  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.519080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.519216  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.519376  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.712871  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:53.755059  148525 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766564  148525 node_ready.go:49] node "default-k8s-diff-port-361847" has status "Ready":"True"
	I1010 19:28:53.766590  148525 node_ready.go:38] duration metric: took 11.490223ms for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766603  148525 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.777458  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
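The node_ready/pod_ready waits above are performed by the harness through the Kubernetes API; an equivalent manual check, shown only as a sketch (it is not part of the harness output), would be:

# Sketch of equivalent manual waits; the context and node names are taken from
# the log above.
kubectl --context default-k8s-diff-port-361847 wait --for=condition=Ready \
  node/default-k8s-diff-port-361847 --timeout=6m
kubectl --context default-k8s-diff-port-361847 -n kube-system wait \
  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m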
	I1010 19:28:53.875493  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:53.875525  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:53.911443  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.944885  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:53.944919  148525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:53.945487  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:54.011209  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.011239  148525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:54.039679  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.598172  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598226  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598584  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598608  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.598619  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598898  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:54.598931  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598939  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.643365  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.643392  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.643734  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.643760  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287018  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.341483807s)
	I1010 19:28:55.287045  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.247326452s)
	I1010 19:28:55.287089  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287094  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287112  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287440  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287479  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287506  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287524  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287570  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287589  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287598  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287607  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287818  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287831  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.287855  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287862  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287872  148525 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-361847"
	I1010 19:28:55.287880  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.289944  148525 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
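If the metrics-server addon enabled here needs to be checked by hand (the pod_ready checks elsewhere in this log wait on it), standard kubectl commands such as the following sketch work; none of this appears in the harness output.

# Manual verification sketch for the metrics-server addon.
kubectl --context default-k8s-diff-port-361847 -n kube-system rollout status deployment/metrics-server
kubectl --context default-k8s-diff-port-361847 get apiservice v1beta1.metrics.k8s.io
kubectl --context default-k8s-diff-port-361847 top nodes   # only succeeds once metrics are actually being served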
	I1010 19:28:53.841387  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:53.841441  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.892951  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:53.893005  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.947636  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:53.947668  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.992969  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:53.992998  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:54.520652  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:54.520703  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:28:54.588366  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:54.588418  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:54.651179  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:54.651227  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:54.712881  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:54.712925  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:54.779030  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:54.779094  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:54.821961  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:54.822002  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:54.871409  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:54.871446  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:57.425310  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:57.442308  147213 api_server.go:72] duration metric: took 4m17.02881034s to wait for apiserver process to appear ...
	I1010 19:28:57.442343  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:57.442383  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:57.442444  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:57.481392  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.481420  147213 cri.go:89] found id: ""
	I1010 19:28:57.481430  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:57.481503  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.486191  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:57.486269  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:57.532238  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.532271  147213 cri.go:89] found id: ""
	I1010 19:28:57.532284  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:57.532357  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.538105  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:57.538188  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:57.579729  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:57.579757  147213 cri.go:89] found id: ""
	I1010 19:28:57.579767  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:57.579833  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.584494  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:57.584568  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:57.623920  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:57.623949  147213 cri.go:89] found id: ""
	I1010 19:28:57.623960  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:57.624028  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.628927  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:57.629018  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:57.669669  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.669698  147213 cri.go:89] found id: ""
	I1010 19:28:57.669707  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:57.669771  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.674449  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:57.674526  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:57.721856  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:57.721881  147213 cri.go:89] found id: ""
	I1010 19:28:57.721891  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:57.721955  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.726422  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:57.726497  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:57.764464  147213 cri.go:89] found id: ""
	I1010 19:28:57.764499  147213 logs.go:282] 0 containers: []
	W1010 19:28:57.764512  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:57.764521  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:57.764595  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:57.809758  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:57.809784  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:57.809788  147213 cri.go:89] found id: ""
	I1010 19:28:57.809797  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:57.809854  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.815576  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.820152  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:57.820181  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.869339  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:57.869383  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.918698  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:57.918739  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.960939  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:57.960985  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:58.013572  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:58.013612  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:58.053247  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:58.053277  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:58.507428  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:58.507473  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:58.552704  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:58.552742  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:58.672077  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:58.672127  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:58.690997  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:58.691049  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:58.735251  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:58.735287  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:55.291700  148525 addons.go:510] duration metric: took 1.845864985s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:55.785186  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:57.789567  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:00.284444  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:01.297627  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.297660  148525 pod_ready.go:82] duration metric: took 7.520173084s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.297676  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804654  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.804676  148525 pod_ready.go:82] duration metric: took 506.992872ms for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804690  148525 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809788  148525 pod_ready.go:93] pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.809814  148525 pod_ready.go:82] duration metric: took 5.116023ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809825  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814460  148525 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.814486  148525 pod_ready.go:82] duration metric: took 4.652085ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814501  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819719  148525 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.819741  148525 pod_ready.go:82] duration metric: took 5.231258ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819753  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082285  148525 pod_ready.go:93] pod "kube-proxy-jlvn6" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.082325  148525 pod_ready.go:82] duration metric: took 262.562954ms for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082342  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481705  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.481730  148525 pod_ready.go:82] duration metric: took 399.378957ms for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481742  148525 pod_ready.go:39] duration metric: took 8.715126416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:29:02.481779  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:29:02.481832  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:29:02.498706  148525 api_server.go:72] duration metric: took 9.052891898s to wait for apiserver process to appear ...
	I1010 19:29:02.498760  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:29:02.498795  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:29:02.503501  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:29:02.504594  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:02.504620  148525 api_server.go:131] duration metric: took 5.850548ms to wait for apiserver health ...
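The healthz probe logged just above can be reproduced by hand; on a default kubeadm-built cluster /healthz and /version are readable by unauthenticated clients, so a plain curl with TLS verification disabled is enough. The address and port are the ones from this run's log.

# Reproducing the apiserver health check by hand (sketch).
curl -k https://192.168.50.32:8444/healthz   # expected body: ok
curl -k https://192.168.50.32:8444/version   # reports gitVersion v1.31.1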
	I1010 19:29:02.504629  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:02.685579  148525 system_pods.go:59] 9 kube-system pods found
	I1010 19:29:02.685611  148525 system_pods.go:61] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:02.685618  148525 system_pods.go:61] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:02.685624  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:02.685630  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:02.685635  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:02.685639  148525 system_pods.go:61] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:02.685644  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:02.685653  148525 system_pods.go:61] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:02.685658  148525 system_pods.go:61] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:02.685669  148525 system_pods.go:74] duration metric: took 181.032548ms to wait for pod list to return data ...
	I1010 19:29:02.685683  148525 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:02.883256  148525 default_sa.go:45] found service account: "default"
	I1010 19:29:02.883288  148525 default_sa.go:55] duration metric: took 197.59742ms for default service account to be created ...
	I1010 19:29:02.883298  148525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:03.084706  148525 system_pods.go:86] 9 kube-system pods found
	I1010 19:29:03.084737  148525 system_pods.go:89] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:03.084742  148525 system_pods.go:89] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:03.084746  148525 system_pods.go:89] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:03.084751  148525 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:03.084755  148525 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:03.084759  148525 system_pods.go:89] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:03.084762  148525 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:03.084768  148525 system_pods.go:89] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:03.084772  148525 system_pods.go:89] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:03.084779  148525 system_pods.go:126] duration metric: took 201.476637ms to wait for k8s-apps to be running ...
	I1010 19:29:03.084787  148525 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:03.084832  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:03.100986  148525 system_svc.go:56] duration metric: took 16.183062ms WaitForService to wait for kubelet
	I1010 19:29:03.101026  148525 kubeadm.go:582] duration metric: took 9.655245557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:03.101050  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:03.282063  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:03.282095  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:03.282106  148525 node_conditions.go:105] duration metric: took 181.049888ms to run NodePressure ...
	I1010 19:29:03.282119  148525 start.go:241] waiting for startup goroutines ...
	I1010 19:29:03.282125  148525 start.go:246] waiting for cluster config update ...
	I1010 19:29:03.282135  148525 start.go:255] writing updated cluster config ...
	I1010 19:29:03.282414  148525 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:03.331838  148525 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:03.333698  148525 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-361847" cluster and "default" namespace by default
	I1010 19:28:58.775358  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:58.775396  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:58.812210  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:58.812269  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:01.381750  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:29:01.386658  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:29:01.387793  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:01.387819  147213 api_server.go:131] duration metric: took 3.945468552s to wait for apiserver health ...
	I1010 19:29:01.387829  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:01.387861  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:29:01.387948  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:29:01.433312  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:01.433344  147213 cri.go:89] found id: ""
	I1010 19:29:01.433433  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:29:01.433521  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.437920  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:29:01.437983  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:29:01.476429  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.476458  147213 cri.go:89] found id: ""
	I1010 19:29:01.476470  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:29:01.476522  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.480912  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:29:01.480987  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:29:01.522141  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.522164  147213 cri.go:89] found id: ""
	I1010 19:29:01.522173  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:29:01.522238  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.526742  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:29:01.526803  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:29:01.572715  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:01.572747  147213 cri.go:89] found id: ""
	I1010 19:29:01.572759  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:29:01.572814  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.577754  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:29:01.577832  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:29:01.616077  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.616104  147213 cri.go:89] found id: ""
	I1010 19:29:01.616121  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:29:01.616185  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.620622  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:29:01.620702  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:29:01.662859  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:01.662889  147213 cri.go:89] found id: ""
	I1010 19:29:01.662903  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:29:01.662964  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.667491  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:29:01.667585  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:29:01.706191  147213 cri.go:89] found id: ""
	I1010 19:29:01.706217  147213 logs.go:282] 0 containers: []
	W1010 19:29:01.706228  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:29:01.706234  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:29:01.706299  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:29:01.753559  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:01.753581  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:01.753584  147213 cri.go:89] found id: ""
	I1010 19:29:01.753591  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:29:01.753645  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.758179  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.762336  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:29:01.762358  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:29:01.867667  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:29:01.867698  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.911722  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:29:01.911756  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.955152  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:29:01.955189  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.995010  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:29:01.995041  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:02.047505  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:29:02.047546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:02.085080  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:29:02.085110  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:02.128482  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:29:02.128527  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:02.194867  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:29:02.194904  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:29:02.211881  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:29:02.211911  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:02.262969  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:29:02.263013  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:02.302921  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:29:02.302956  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:29:02.671102  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:29:02.671169  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:29:05.241477  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:29:05.241508  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.241513  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.241517  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.241521  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.241525  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.241528  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.241534  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.241540  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.241549  147213 system_pods.go:74] duration metric: took 3.853712488s to wait for pod list to return data ...
	I1010 19:29:05.241556  147213 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:05.244686  147213 default_sa.go:45] found service account: "default"
	I1010 19:29:05.244721  147213 default_sa.go:55] duration metric: took 3.158069ms for default service account to be created ...
	I1010 19:29:05.244733  147213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:05.249372  147213 system_pods.go:86] 8 kube-system pods found
	I1010 19:29:05.249398  147213 system_pods.go:89] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.249404  147213 system_pods.go:89] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.249408  147213 system_pods.go:89] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.249413  147213 system_pods.go:89] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.249418  147213 system_pods.go:89] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.249425  147213 system_pods.go:89] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.249433  147213 system_pods.go:89] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.249442  147213 system_pods.go:89] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.249455  147213 system_pods.go:126] duration metric: took 4.715381ms to wait for k8s-apps to be running ...
	I1010 19:29:05.249467  147213 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:05.249519  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:05.265180  147213 system_svc.go:56] duration metric: took 15.703413ms WaitForService to wait for kubelet
	I1010 19:29:05.265216  147213 kubeadm.go:582] duration metric: took 4m24.851723603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:05.265237  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:05.268775  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:05.268807  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:05.268821  147213 node_conditions.go:105] duration metric: took 3.575195ms to run NodePressure ...
	I1010 19:29:05.268834  147213 start.go:241] waiting for startup goroutines ...
	I1010 19:29:05.268840  147213 start.go:246] waiting for cluster config update ...
	I1010 19:29:05.268869  147213 start.go:255] writing updated cluster config ...
	I1010 19:29:05.269148  147213 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:05.319999  147213 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:05.322189  147213 out.go:177] * Done! kubectl is now configured to use "no-preload-320324" cluster and "default" namespace by default
	I1010 19:29:41.275119  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:29:41.275272  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:29:41.276822  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:29:41.276919  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:29:41.277017  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:29:41.277142  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:29:41.277254  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:29:41.277357  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:29:41.279069  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:29:41.279160  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:29:41.279217  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:29:41.279306  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:29:41.279381  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:29:41.279484  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:29:41.279576  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:29:41.279674  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:29:41.279779  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:29:41.279906  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:29:41.279971  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:29:41.280005  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:29:41.280052  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:29:41.280095  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:29:41.280144  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:29:41.280219  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:29:41.280317  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:29:41.280474  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:29:41.280583  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:29:41.280648  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:29:41.280736  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:29:41.282180  148123 out.go:235]   - Booting up control plane ...
	I1010 19:29:41.282266  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:29:41.282336  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:29:41.282414  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:29:41.282538  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:29:41.282748  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:29:41.282822  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:29:41.282896  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283061  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283123  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283287  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283348  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283519  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283623  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283809  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283900  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.284115  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.284135  148123 kubeadm.go:310] 
	I1010 19:29:41.284201  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:29:41.284243  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:29:41.284257  148123 kubeadm.go:310] 
	I1010 19:29:41.284307  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:29:41.284346  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:29:41.284481  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:29:41.284505  148123 kubeadm.go:310] 
	I1010 19:29:41.284655  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:29:41.284714  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:29:41.284752  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:29:41.284758  148123 kubeadm.go:310] 
	I1010 19:29:41.284913  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:29:41.285038  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:29:41.285055  148123 kubeadm.go:310] 
	I1010 19:29:41.285169  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:29:41.285307  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:29:41.285475  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:29:41.285587  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:29:41.285644  148123 kubeadm.go:310] 
	W1010 19:29:41.285747  148123 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1010 19:29:41.285792  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:29:46.681684  148123 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.395862838s)
	I1010 19:29:46.681797  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:46.696927  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:29:46.708250  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:29:46.708280  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:29:46.708339  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:29:46.719748  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:29:46.719828  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:29:46.730892  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:29:46.742329  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:29:46.742401  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:29:46.752919  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.762860  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:29:46.762932  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.772513  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:29:46.781672  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:29:46.781746  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:29:46.791250  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:29:47.018706  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:31:43.351851  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:31:43.351992  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:31:43.353664  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:31:43.353752  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:31:43.353861  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:31:43.353998  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:31:43.354130  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:31:43.354235  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:31:43.356450  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:31:43.356557  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:31:43.356652  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:31:43.356783  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:31:43.356902  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:31:43.357002  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:31:43.357074  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:31:43.357181  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:31:43.357247  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:31:43.357325  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:31:43.357408  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:31:43.357459  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:31:43.357529  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:31:43.357604  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:31:43.357676  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:31:43.357751  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:31:43.357830  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:31:43.357979  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:31:43.358092  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:31:43.358158  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:31:43.358263  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:31:43.360216  148123 out.go:235]   - Booting up control plane ...
	I1010 19:31:43.360332  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:31:43.360463  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:31:43.360551  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:31:43.360673  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:31:43.360907  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:31:43.360976  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:31:43.361058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361309  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361410  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361648  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361742  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361960  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362308  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362399  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362640  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362652  148123 kubeadm.go:310] 
	I1010 19:31:43.362708  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:31:43.362758  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:31:43.362768  148123 kubeadm.go:310] 
	I1010 19:31:43.362809  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:31:43.362858  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:31:43.362981  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:31:43.362993  148123 kubeadm.go:310] 
	I1010 19:31:43.363119  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:31:43.363164  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:31:43.363204  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:31:43.363219  148123 kubeadm.go:310] 
	I1010 19:31:43.363344  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:31:43.363454  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:31:43.363465  148123 kubeadm.go:310] 
	I1010 19:31:43.363591  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:31:43.363708  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:31:43.363803  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:31:43.363892  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:31:43.363978  148123 kubeadm.go:394] duration metric: took 8m3.085634474s to StartCluster
	I1010 19:31:43.364052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:31:43.364159  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:31:43.364268  148123 kubeadm.go:310] 
	I1010 19:31:43.413041  148123 cri.go:89] found id: ""
	I1010 19:31:43.413075  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.413086  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:31:43.413092  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:31:43.413206  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:31:43.456454  148123 cri.go:89] found id: ""
	I1010 19:31:43.456487  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.456499  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:31:43.456507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:31:43.456567  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:31:43.498582  148123 cri.go:89] found id: ""
	I1010 19:31:43.498614  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.498624  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:31:43.498633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:31:43.498694  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:31:43.540557  148123 cri.go:89] found id: ""
	I1010 19:31:43.540589  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.540598  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:31:43.540605  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:31:43.540661  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:31:43.576743  148123 cri.go:89] found id: ""
	I1010 19:31:43.576776  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.576787  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:31:43.576796  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:31:43.576887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:31:43.614620  148123 cri.go:89] found id: ""
	I1010 19:31:43.614652  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.614662  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:31:43.614668  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:31:43.614730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:31:43.652008  148123 cri.go:89] found id: ""
	I1010 19:31:43.652061  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.652075  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:31:43.652084  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:31:43.652153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:31:43.689990  148123 cri.go:89] found id: ""
	I1010 19:31:43.690019  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.690028  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:31:43.690043  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:31:43.690056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:31:43.745634  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:31:43.745673  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:31:43.760064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:31:43.760095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:31:43.840168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:31:43.840197  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:31:43.840214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:31:43.943265  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:31:43.943303  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1010 19:31:43.987758  148123 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:31:43.987816  148123 out.go:270] * 
	W1010 19:31:43.987891  148123 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.987906  148123 out.go:270] * 
	W1010 19:31:43.988703  148123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:31:43.991928  148123 out.go:201] 
	W1010 19:31:43.993494  148123 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.993556  148123 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:31:43.993585  148123 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1010 19:31:43.995273  148123 out.go:201] 
	
	
	==> CRI-O <==
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.873494833Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728588705873474526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0664bca-555e-49b0-8d3e-0740b046fa5b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.874113904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca250df8-a923-45ff-ad41-8755588cd979 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.874178173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca250df8-a923-45ff-ad41-8755588cd979 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.874211786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ca250df8-a923-45ff-ad41-8755588cd979 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.909259655Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e04cb6d9-dde2-4590-bbef-37ad07d29023 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.909415275Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e04cb6d9-dde2-4590-bbef-37ad07d29023 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.910795891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea75523d-38b4-4867-9365-a949c04378a5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.911148204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728588705911116054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea75523d-38b4-4867-9365-a949c04378a5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.911814727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cab4bebb-f666-495c-8104-a9c8e12d9162 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.911866521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cab4bebb-f666-495c-8104-a9c8e12d9162 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.911896228Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cab4bebb-f666-495c-8104-a9c8e12d9162 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.946715650Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ad2ec7b-8988-4f95-a0dc-38a91e7c60dc name=/runtime.v1.RuntimeService/Version
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.946839388Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ad2ec7b-8988-4f95-a0dc-38a91e7c60dc name=/runtime.v1.RuntimeService/Version
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.948230505Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f29d16d7-317c-454d-a833-f7b96db9fc90 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.948703289Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728588705948677562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f29d16d7-317c-454d-a833-f7b96db9fc90 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.949360241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f03edf5-f937-4b4a-b1cc-0a87d91f59f8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.949438183Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f03edf5-f937-4b4a-b1cc-0a87d91f59f8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.949472905Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1f03edf5-f937-4b4a-b1cc-0a87d91f59f8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.983536172Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ed10ba5-4b46-413d-b3b7-31c3cddf157e name=/runtime.v1.RuntimeService/Version
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.983623493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ed10ba5-4b46-413d-b3b7-31c3cddf157e name=/runtime.v1.RuntimeService/Version
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.984955268Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=46d44b7d-8ab7-4052-a5a0-66fa5489c2ab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.985560451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728588705985537523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=46d44b7d-8ab7-4052-a5a0-66fa5489c2ab name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.986034986Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a987609-3b4b-404f-b3ee-4587a76d6115 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.986108174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a987609-3b4b-404f-b3ee-4587a76d6115 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:31:45 old-k8s-version-947203 crio[635]: time="2024-10-10 19:31:45.986142266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1a987609-3b4b-404f-b3ee-4587a76d6115 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct10 19:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051246] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042600] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.085550] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.699486] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.514715] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.834131] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.134078] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.216712] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.120541] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.278860] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +6.492743] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.072493] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.094540] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +12.843516] kauditd_printk_skb: 46 callbacks suppressed
	[Oct10 19:27] systemd-fstab-generator[5080]: Ignoring "noauto" option for root device
	[Oct10 19:29] systemd-fstab-generator[5373]: Ignoring "noauto" option for root device
	[  +0.064417] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:31:46 up 8 min,  0 users,  load average: 0.00, 0.07, 0.05
	Linux old-k8s-version-947203 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]: net.cgoLookupIPCNAME(0x48ab5d6, 0x3, 0xc000b593b0, 0x1f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]:         /usr/local/go/src/net/cgo_unix.go:161 +0x16b
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]: net.cgoIPLookup(0xc0002e53e0, 0x48ab5d6, 0x3, 0xc000b593b0, 0x1f)
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]:         /usr/local/go/src/net/cgo_unix.go:218 +0x67
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]: created by net.cgoLookupIP
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]:         /usr/local/go/src/net/cgo_unix.go:228 +0xc7
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]: goroutine 123 [select]:
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0000514f0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001e6900, 0x0, 0x0)
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000546540)
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 10 19:31:43 old-k8s-version-947203 kubelet[5555]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 10 19:31:43 old-k8s-version-947203 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 10 19:31:43 old-k8s-version-947203 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 10 19:31:43 old-k8s-version-947203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 10 19:31:44 old-k8s-version-947203 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 10 19:31:44 old-k8s-version-947203 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 10 19:31:44 old-k8s-version-947203 kubelet[5621]: I1010 19:31:44.122926    5621 server.go:416] Version: v1.20.0
	Oct 10 19:31:44 old-k8s-version-947203 kubelet[5621]: I1010 19:31:44.123257    5621 server.go:837] Client rotation is on, will bootstrap in background
	Oct 10 19:31:44 old-k8s-version-947203 kubelet[5621]: I1010 19:31:44.125480    5621 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 10 19:31:44 old-k8s-version-947203 kubelet[5621]: I1010 19:31:44.126617    5621 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Oct 10 19:31:44 old-k8s-version-947203 kubelet[5621]: W1010 19:31:44.126644    5621 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947203 -n old-k8s-version-947203
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 2 (246.716647ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-947203" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (714.09s)
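Note on the failure above, as a hedged sketch only: the captured log diagnoses itself. On this cgroup v2 guest the v1.20.0 kubelet keeps crashing ("Cannot detect current cgroup on cgroup v2", restart counter at 20), so kubeadm never sees a healthy control plane and SecondStart times out. The retry that minikube itself suggests in the log would combine this profile's core flags from the Audit table with the cgroup-driver override; the exact flag set for another environment may differ:

	minikube start -p old-k8s-version-947203 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd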

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847: exit status 3 (3.168146845s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:21:04.677268  148415 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.32:22: connect: no route to host
	E1010 19:21:04.677289  148415 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.32:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-361847 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-361847 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154680892s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.32:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-361847 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847
E1010 19:21:12.211504   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847: exit status 3 (3.061059097s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1010 19:21:13.893295  148479 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.32:22: connect: no route to host
	E1010 19:21:13.893324  148479 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.32:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-361847" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-541370 -n embed-certs-541370
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-10 19:37:31.390155657 +0000 UTC m=+5999.075090846
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-541370 -n embed-certs-541370
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-541370 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-541370 logs -n 25: (2.12997997s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-029826             | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-029826                  | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-029826 --memory=2200 --alsologtostderr   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-541370            | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-029826 image list                           | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:17 UTC | 10 Oct 24 19:18 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320324                  | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-947203        | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-361847  | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-541370                 | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-947203             | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-361847       | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC | 10 Oct 24 19:29 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 19:21:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 19:21:13.943219  148525 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:21:13.943336  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943343  148525 out.go:358] Setting ErrFile to fd 2...
	I1010 19:21:13.943347  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943560  148525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:21:13.944109  148525 out.go:352] Setting JSON to false
	I1010 19:21:13.945219  148525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11020,"bootTime":1728577054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:21:13.945321  148525 start.go:139] virtualization: kvm guest
	I1010 19:21:13.947915  148525 out.go:177] * [default-k8s-diff-port-361847] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:21:13.950021  148525 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:21:13.950037  148525 notify.go:220] Checking for updates...
	I1010 19:21:13.952994  148525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:21:13.954661  148525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:21:13.956438  148525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:21:13.958502  148525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:21:13.960099  148525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:21:13.961930  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:21:13.962374  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.962450  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.978323  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I1010 19:21:13.978926  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.979520  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.979538  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.979954  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.980144  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:13.980446  148525 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:21:13.980745  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.980784  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.996046  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I1010 19:21:13.996534  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.997069  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.997097  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.997530  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.997788  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:14.033593  148525 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 19:21:14.035367  148525 start.go:297] selected driver: kvm2
	I1010 19:21:14.035394  148525 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.035526  148525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:21:14.036341  148525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.036452  148525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:21:14.052462  148525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:21:14.052918  148525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:21:14.052967  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:21:14.053019  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:21:14.053067  148525 start.go:340] cluster config:
	{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.053178  148525 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.055485  148525 out.go:177] * Starting "default-k8s-diff-port-361847" primary control-plane node in "default-k8s-diff-port-361847" cluster
	I1010 19:21:16.773106  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:14.056945  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:21:14.057002  148525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 19:21:14.057014  148525 cache.go:56] Caching tarball of preloaded images
	I1010 19:21:14.057118  148525 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:21:14.057134  148525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 19:21:14.057268  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:21:14.057476  148525 start.go:360] acquireMachinesLock for default-k8s-diff-port-361847: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:21:22.853158  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:25.925174  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:32.005160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:35.077198  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:41.157130  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:44.229127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:50.309136  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:53.381191  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:59.461129  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:02.533201  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:08.613124  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:11.685169  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:17.765161  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:20.837208  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:26.917127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:29.989172  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:36.069147  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:39.141173  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:45.221160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:48.293141  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:51.297376  147758 start.go:364] duration metric: took 3m49.312490934s to acquireMachinesLock for "embed-certs-541370"
	I1010 19:22:51.297453  147758 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:22:51.297464  147758 fix.go:54] fixHost starting: 
	I1010 19:22:51.297787  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:22:51.297848  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:22:51.314087  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1010 19:22:51.314588  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:22:51.315115  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:22:51.315138  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:22:51.315509  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:22:51.315691  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:22:51.315879  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:22:51.317597  147758 fix.go:112] recreateIfNeeded on embed-certs-541370: state=Stopped err=<nil>
	I1010 19:22:51.317621  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	W1010 19:22:51.317781  147758 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:22:51.319664  147758 out.go:177] * Restarting existing kvm2 VM for "embed-certs-541370" ...
	I1010 19:22:51.320967  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Start
	I1010 19:22:51.321134  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring networks are active...
	I1010 19:22:51.322026  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network default is active
	I1010 19:22:51.322468  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network mk-embed-certs-541370 is active
	I1010 19:22:51.322874  147758 main.go:141] libmachine: (embed-certs-541370) Getting domain xml...
	I1010 19:22:51.323687  147758 main.go:141] libmachine: (embed-certs-541370) Creating domain...
	I1010 19:22:51.294881  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:22:51.294927  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295226  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:22:51.295256  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295454  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:22:51.297198  147213 machine.go:96] duration metric: took 4m37.414594306s to provisionDockerMachine
	I1010 19:22:51.297252  147213 fix.go:56] duration metric: took 4m37.436635356s for fixHost
	I1010 19:22:51.297259  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 4m37.436668423s
	W1010 19:22:51.297278  147213 start.go:714] error starting host: provision: host is not running
	W1010 19:22:51.297382  147213 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1010 19:22:51.297396  147213 start.go:729] Will try again in 5 seconds ...
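The lines above show the no-preload start giving up on provisioning (no SSH route to 192.168.72.11) and scheduling a fresh attempt. Below is a minimal Go sketch of that outer retry, assuming a hypothetical startHost function; minikube's actual start.go also re-acquires the machines lock between attempts.

    package main

    import (
    	"fmt"
    	"time"
    )

    // startWithRetry retries a failed host start a few times, mirroring the
    // "StartHost failed, but will try again" / "Will try again in 5 seconds"
    // pattern in the log above. startHost is a hypothetical stand-in.
    func startWithRetry(startHost func() error, attempts int, wait time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = startHost(); err == nil {
    			return nil
    		}
    		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
    		time.Sleep(wait)
    	}
    	return fmt.Errorf("start failed after %d attempts: %w", attempts, err)
    }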
	I1010 19:22:52.568699  147758 main.go:141] libmachine: (embed-certs-541370) Waiting to get IP...
	I1010 19:22:52.569582  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.569952  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.570018  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.569935  148914 retry.go:31] will retry after 261.244287ms: waiting for machine to come up
	I1010 19:22:52.832639  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.833280  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.833310  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.833200  148914 retry.go:31] will retry after 304.116732ms: waiting for machine to come up
	I1010 19:22:53.138770  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.139091  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.139124  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.139055  148914 retry.go:31] will retry after 484.354474ms: waiting for machine to come up
	I1010 19:22:53.624831  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.625293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.625323  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.625234  148914 retry.go:31] will retry after 591.916836ms: waiting for machine to come up
	I1010 19:22:54.219214  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.219732  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.219763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.219673  148914 retry.go:31] will retry after 614.162479ms: waiting for machine to come up
	I1010 19:22:54.835573  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.836038  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.836063  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.835988  148914 retry.go:31] will retry after 824.170953ms: waiting for machine to come up
	I1010 19:22:55.662092  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:55.662646  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:55.662668  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:55.662586  148914 retry.go:31] will retry after 928.483848ms: waiting for machine to come up
	I1010 19:22:56.593200  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:56.593724  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:56.593756  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:56.593679  148914 retry.go:31] will retry after 941.138644ms: waiting for machine to come up
	I1010 19:22:56.299351  147213 start.go:360] acquireMachinesLock for no-preload-320324: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:22:57.536977  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:57.537403  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:57.537429  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:57.537331  148914 retry.go:31] will retry after 1.262203584s: waiting for machine to come up
	I1010 19:22:58.801921  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:58.802420  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:58.802454  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:58.802381  148914 retry.go:31] will retry after 2.154751391s: waiting for machine to come up
	I1010 19:23:00.960100  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:00.960661  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:00.960684  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:00.960607  148914 retry.go:31] will retry after 1.945155171s: waiting for machine to come up
	I1010 19:23:02.907705  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:02.908097  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:02.908129  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:02.908038  148914 retry.go:31] will retry after 3.245262469s: waiting for machine to come up
	I1010 19:23:06.157527  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:06.157897  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:06.157925  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:06.157858  148914 retry.go:31] will retry after 3.973579024s: waiting for machine to come up
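The wait-for-IP loop above backs off roughly geometrically (261ms, 304ms, ..., ~4s) while polling the libvirt DHCP leases for the domain's MAC address. A sketch of that retry pattern, with lookupIP standing in as a hypothetical helper for the lease query:

    package main

    import (
    	"errors"
    	"math/rand"
    	"time"
    )

    // waitForIP polls until the domain reports a DHCP lease, backing off with
    // jitter so parallel starts (embed-certs, old-k8s-version, ...) do not hit
    // libvirt in lock-step. Delays are illustrative, not retry.go's exact curve.
    func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, ok := lookupIP(); ok {
    			return ip, nil
    		}
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
    		delay = delay * 3 / 2
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }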
	I1010 19:23:11.321975  148123 start.go:364] duration metric: took 3m17.648521443s to acquireMachinesLock for "old-k8s-version-947203"
	I1010 19:23:11.322043  148123 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:11.322056  148123 fix.go:54] fixHost starting: 
	I1010 19:23:11.322537  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:11.322596  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:11.339586  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1010 19:23:11.340091  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:11.340686  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:23:11.340719  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:11.341101  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:11.341295  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:11.341453  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetState
	I1010 19:23:11.343215  148123 fix.go:112] recreateIfNeeded on old-k8s-version-947203: state=Stopped err=<nil>
	I1010 19:23:11.343247  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	W1010 19:23:11.343408  148123 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:11.345653  148123 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-947203" ...
	I1010 19:23:10.135369  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has current primary IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135830  147758 main.go:141] libmachine: (embed-certs-541370) Found IP for machine: 192.168.39.120
	I1010 19:23:10.135839  147758 main.go:141] libmachine: (embed-certs-541370) Reserving static IP address...
	I1010 19:23:10.136283  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.136311  147758 main.go:141] libmachine: (embed-certs-541370) Reserved static IP address: 192.168.39.120
	I1010 19:23:10.136327  147758 main.go:141] libmachine: (embed-certs-541370) DBG | skip adding static IP to network mk-embed-certs-541370 - found existing host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"}
	I1010 19:23:10.136339  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Getting to WaitForSSH function...
	I1010 19:23:10.136351  147758 main.go:141] libmachine: (embed-certs-541370) Waiting for SSH to be available...
	I1010 19:23:10.138861  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139259  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.139293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139438  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH client type: external
	I1010 19:23:10.139472  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa (-rw-------)
	I1010 19:23:10.139517  147758 main.go:141] libmachine: (embed-certs-541370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:10.139541  147758 main.go:141] libmachine: (embed-certs-541370) DBG | About to run SSH command:
	I1010 19:23:10.139562  147758 main.go:141] libmachine: (embed-certs-541370) DBG | exit 0
	I1010 19:23:10.261078  147758 main.go:141] libmachine: (embed-certs-541370) DBG | SSH cmd err, output: <nil>: 
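Once the lease appears, the driver probes SSH by shelling out to the system ssh binary with host-key checking disabled and running "exit 0" until it succeeds, as in the DBG lines above. A rough sketch under those assumptions; the 10s spacing and attempt count are illustrative, not the driver's exact values:

    package main

    import (
    	"os/exec"
    	"time"
    )

    // waitForSSH runs "exit 0" over ssh until the guest accepts the connection,
    // using the same non-interactive options shown in the log above.
    func waitForSSH(ip, keyPath string, attempts int) error {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "PasswordAuthentication=no",
    		"-i", keyPath,
    		"docker@" + ip,
    		"exit 0",
    	}
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command("ssh", args...).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(10 * time.Second)
    	}
    	return err
    }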
	I1010 19:23:10.261533  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetConfigRaw
	I1010 19:23:10.262192  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.265071  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265467  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.265515  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265737  147758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/config.json ...
	I1010 19:23:10.265941  147758 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:10.265960  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:10.266188  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.269186  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269618  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.269649  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269799  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.269984  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270206  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270345  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.270550  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.270834  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.270849  147758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:10.373285  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:10.373316  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373625  147758 buildroot.go:166] provisioning hostname "embed-certs-541370"
	I1010 19:23:10.373660  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373835  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.376552  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.376951  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.376994  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.377132  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.377332  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377489  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377606  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.377745  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.377918  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.377930  147758 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-541370 && echo "embed-certs-541370" | sudo tee /etc/hostname
	I1010 19:23:10.495847  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-541370
	
	I1010 19:23:10.495880  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.498868  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499205  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.499247  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499362  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.499556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499700  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499829  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.499961  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.500187  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.500210  147758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-541370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-541370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-541370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:10.614318  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:10.614357  147758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:10.614412  147758 buildroot.go:174] setting up certificates
	I1010 19:23:10.614429  147758 provision.go:84] configureAuth start
	I1010 19:23:10.614457  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.614763  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.617457  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.617888  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.617916  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.618078  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.620243  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620635  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.620666  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620789  147758 provision.go:143] copyHostCerts
	I1010 19:23:10.620895  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:10.620913  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:10.620998  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:10.621111  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:10.621123  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:10.621159  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:10.621245  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:10.621257  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:10.621292  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:10.621364  147758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.embed-certs-541370 san=[127.0.0.1 192.168.39.120 embed-certs-541370 localhost minikube]
	I1010 19:23:10.697456  147758 provision.go:177] copyRemoteCerts
	I1010 19:23:10.697515  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:10.697547  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.700439  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.700799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700956  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.701162  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.701320  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.701465  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:10.783442  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:10.808446  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 19:23:10.832117  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:23:10.856286  147758 provision.go:87] duration metric: took 241.840139ms to configureAuth
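configureAuth above regenerates the machine's server certificate, signed by the local minikube CA and carrying the SANs listed in the provision log (127.0.0.1, the VM IP, the machine name, localhost, minikube), then copies it to /etc/docker. A minimal illustrative sketch with crypto/x509, not the actual provision.go code; loading ca.pem/ca-key.pem from disk is omitted:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert mints a server cert signed by the given CA, with the same
    // SAN set seen in the log above. The 26280h lifetime matches the profile's
    // CertExpiration setting.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, machine string, ip net.IP) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins." + machine}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{machine, "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), ip},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	return der, key, err
    }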
	I1010 19:23:10.856318  147758 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:10.856528  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:10.856640  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.859252  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859677  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.859708  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859916  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.860087  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860222  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.860524  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.860688  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.860702  147758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:11.086349  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:11.086375  147758 machine.go:96] duration metric: took 820.421344ms to provisionDockerMachine
	I1010 19:23:11.086386  147758 start.go:293] postStartSetup for "embed-certs-541370" (driver="kvm2")
	I1010 19:23:11.086401  147758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:11.086423  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.086755  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:11.086783  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.089482  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.089838  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.089860  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.090042  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.090253  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.090410  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.090535  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.172474  147758 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:11.176699  147758 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:11.176733  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:11.176800  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:11.176899  147758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:11.177044  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:11.186985  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:11.211385  147758 start.go:296] duration metric: took 124.982089ms for postStartSetup
	I1010 19:23:11.211442  147758 fix.go:56] duration metric: took 19.913977793s for fixHost
	I1010 19:23:11.211472  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.214421  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214780  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.214812  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214999  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.215219  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215429  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215612  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.215786  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:11.215974  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:11.215985  147758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:11.321786  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588191.295446348
	
	I1010 19:23:11.321814  147758 fix.go:216] guest clock: 1728588191.295446348
	I1010 19:23:11.321822  147758 fix.go:229] Guest: 2024-10-10 19:23:11.295446348 +0000 UTC Remote: 2024-10-10 19:23:11.211447413 +0000 UTC m=+249.373680838 (delta=83.998935ms)
	I1010 19:23:11.321870  147758 fix.go:200] guest clock delta is within tolerance: 83.998935ms
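The guest-clock check above runs `date +%s.%N` on the VM, parses the result, and compares it to local time; only a delta outside the tolerance would trigger a resync. A sketch of that comparison, with runSSH as a hypothetical helper:

    package main

    import (
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta reads the guest's epoch time over SSH and returns the
    // absolute difference from the host clock. The caller compares the result
    // against its tolerance before deciding to resync.
    func guestClockDelta(runSSH func(cmd string) (string, error)) (time.Duration, error) {
    	out, err := runSSH("date +%s.%N")
    	if err != nil {
    		return 0, err
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, nil
    }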
	I1010 19:23:11.321877  147758 start.go:83] releasing machines lock for "embed-certs-541370", held for 20.024455781s
	I1010 19:23:11.321905  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.322169  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:11.325004  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325350  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.325375  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325566  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326090  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326294  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326383  147758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:11.326444  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.326501  147758 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:11.326529  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.329311  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329657  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.329690  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329713  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329866  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330057  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330160  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.330188  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.330204  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330346  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.330538  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330687  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330821  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.406525  147758 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:11.428958  147758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:11.577663  147758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:11.584024  147758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:11.584112  147758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:11.603163  147758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:11.603190  147758 start.go:495] detecting cgroup driver to use...
	I1010 19:23:11.603291  147758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:11.624744  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:11.645477  147758 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:11.645537  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:11.660216  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:11.675019  147758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:11.796038  147758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:11.967750  147758 docker.go:233] disabling docker service ...
	I1010 19:23:11.967828  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:11.983184  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:12.001603  147758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:12.149408  147758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:12.306724  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:12.324302  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:12.345426  147758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:12.345508  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.357812  147758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:12.357883  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.370095  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.382389  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.395000  147758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:12.408429  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.426851  147758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.450568  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.463434  147758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:12.474537  147758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:12.474606  147758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:12.489074  147758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:12.500048  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:12.635695  147758 ssh_runner.go:195] Run: sudo systemctl restart crio
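Before restarting CRI-O, the runner probed the bridge netfilter sysctl, fell back to `modprobe br_netfilter` when the key was missing (the status-255 error above), and enabled IPv4 forwarding. A sketch of that probe-and-fallback, with run as a hypothetical command-over-SSH helper:

    package main

    // ensureBridgeNetfilter mirrors the sequence in the log above: if the
    // bridge sysctl key is absent, load br_netfilter, then turn on ip_forward.
    func ensureBridgeNetfilter(run func(name string, args ...string) error) error {
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		// The key only exists once the module is loaded.
    		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			return err
    		}
    	}
    	return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }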
	I1010 19:23:12.733511  147758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:12.733593  147758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:12.739072  147758 start.go:563] Will wait 60s for crictl version
	I1010 19:23:12.739138  147758 ssh_runner.go:195] Run: which crictl
	I1010 19:23:12.743675  147758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:12.792272  147758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:12.792379  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.829968  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.862579  147758 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:11.347158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .Start
	I1010 19:23:11.347359  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring networks are active...
	I1010 19:23:11.348252  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network default is active
	I1010 19:23:11.348689  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network mk-old-k8s-version-947203 is active
	I1010 19:23:11.349139  148123 main.go:141] libmachine: (old-k8s-version-947203) Getting domain xml...
	I1010 19:23:11.349799  148123 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:23:12.671472  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting to get IP...
	I1010 19:23:12.672628  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.673145  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.673278  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.673132  149057 retry.go:31] will retry after 280.111667ms: waiting for machine to come up
	I1010 19:23:12.954865  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.955325  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.955377  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.955300  149057 retry.go:31] will retry after 289.967238ms: waiting for machine to come up
	I1010 19:23:13.247039  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.247663  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.247769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.247632  149057 retry.go:31] will retry after 319.085935ms: waiting for machine to come up
	I1010 19:23:12.863797  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:12.867335  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.867760  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:12.867794  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.868029  147758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:12.872503  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:12.887684  147758 kubeadm.go:883] updating cluster {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:12.887809  147758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:12.887853  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:12.924155  147758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:12.924240  147758 ssh_runner.go:195] Run: which lz4
	I1010 19:23:12.928613  147758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:12.933024  147758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:12.933069  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:14.450790  147758 crio.go:462] duration metric: took 1.522223644s to copy over tarball
	I1010 19:23:14.450893  147758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:16.642155  147758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191220673s)
	I1010 19:23:16.642193  147758 crio.go:469] duration metric: took 2.191371146s to extract the tarball
	I1010 19:23:16.642202  147758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:16.679611  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:16.723840  147758 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:16.723865  147758 cache_images.go:84] Images are preloaded, skipping loading
	I1010 19:23:16.723874  147758 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.1 crio true true} ...
	I1010 19:23:16.723998  147758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-541370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:16.724081  147758 ssh_runner.go:195] Run: crio config
	I1010 19:23:16.779659  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:16.779682  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:16.779693  147758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:16.779714  147758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-541370 NodeName:embed-certs-541370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:16.779842  147758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-541370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:16.779904  147758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:16.791424  147758 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:16.791493  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:16.801715  147758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1010 19:23:16.821364  147758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:16.842703  147758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1010 19:23:16.864835  147758 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:16.868928  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:13.568192  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.568690  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.568721  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.568657  149057 retry.go:31] will retry after 575.032546ms: waiting for machine to come up
	I1010 19:23:14.145650  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.146293  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.146319  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.146234  149057 retry.go:31] will retry after 702.803794ms: waiting for machine to come up
	I1010 19:23:14.851201  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.851736  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.851769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.851706  149057 retry.go:31] will retry after 883.195401ms: waiting for machine to come up
	I1010 19:23:15.736627  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:15.737199  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:15.737252  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:15.737155  149057 retry.go:31] will retry after 794.699815ms: waiting for machine to come up
	I1010 19:23:16.533952  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:16.534510  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:16.534545  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:16.534443  149057 retry.go:31] will retry after 961.751912ms: waiting for machine to come up
	I1010 19:23:17.497535  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:17.498007  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:17.498035  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:17.497936  149057 retry.go:31] will retry after 1.423503471s: waiting for machine to come up
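The "will retry after …" lines above come from a polling loop that waits for the freshly booted VM to obtain an IP address, sleeping a little longer (with jitter) between attempts. A minimal Go sketch of that pattern follows; the lookupIP helper is a hypothetical stand-in for the libvirt DHCP-lease query, and the intervals are illustrative only.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the DHCP leases of the
// libvirt network for the machine's current IP address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP until it succeeds or the timeout elapses,
// growing the wait between attempts and adding jitter so retries do not
// synchronise.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("attempt %d failed, will retry after %s\n", attempt, sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}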
	I1010 19:23:16.883162  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:17.027646  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:17.045083  147758 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370 for IP: 192.168.39.120
	I1010 19:23:17.045108  147758 certs.go:194] generating shared ca certs ...
	I1010 19:23:17.045130  147758 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:17.045491  147758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:17.045561  147758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:17.045579  147758 certs.go:256] generating profile certs ...
	I1010 19:23:17.045730  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/client.key
	I1010 19:23:17.045814  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key.dd7630a8
	I1010 19:23:17.045874  147758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key
	I1010 19:23:17.046015  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:17.046055  147758 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:17.046075  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:17.046114  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:17.046150  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:17.046177  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:17.046235  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:17.047131  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:17.087057  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:17.137707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:17.181707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:17.213227  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 19:23:17.247846  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:17.275989  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:17.301144  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 19:23:17.326232  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:17.350586  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:17.374666  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:17.399570  147758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:17.417846  147758 ssh_runner.go:195] Run: openssl version
	I1010 19:23:17.424206  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:17.436091  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441020  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441090  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.447318  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:17.459191  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:17.470878  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476185  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476248  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.482808  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:17.494626  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:17.506522  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511484  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511558  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.517445  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:17.529109  147758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:17.534139  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:17.540846  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:17.547429  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:17.554350  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:17.561036  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:17.567571  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
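The openssl x509 -checkend 86400 runs above verify that none of the control-plane certificates will expire within the next 24 hours (86,400 seconds). A minimal Go sketch of the same check, assuming the certificate is a single PEM block on disk; the path in main is taken from the log and the 24h window mirrors the -checkend argument.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresSoon reports whether the PEM-encoded certificate at path will
// expire within the given window, mirroring `openssl x509 -checkend`.
func certExpiresSoon(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := certExpiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}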
	I1010 19:23:17.574019  147758 kubeadm.go:392] StartCluster: {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:17.574128  147758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:17.574187  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.612699  147758 cri.go:89] found id: ""
	I1010 19:23:17.612804  147758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:17.623827  147758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:17.623856  147758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:17.623917  147758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:17.634732  147758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:17.635754  147758 kubeconfig.go:125] found "embed-certs-541370" server: "https://192.168.39.120:8443"
	I1010 19:23:17.637813  147758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:17.648543  147758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I1010 19:23:17.648590  147758 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:17.648606  147758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:17.648671  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.693966  147758 cri.go:89] found id: ""
	I1010 19:23:17.694057  147758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:17.715977  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:17.727871  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:17.727891  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:17.727942  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:17.738274  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:17.738340  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:17.748925  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:17.758945  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:17.759008  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:17.769169  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.779196  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:17.779282  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.790948  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:17.802264  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:17.802332  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:17.814009  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:17.826820  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:17.947270  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.128720  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.181409785s)
	I1010 19:23:19.128770  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.343735  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.419728  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.529802  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:19.529930  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.030019  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.530833  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.558314  147758 api_server.go:72] duration metric: took 1.028510044s to wait for apiserver process to appear ...
	I1010 19:23:20.558350  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:23:20.558375  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:20.558991  147758 api_server.go:269] stopped: https://192.168.39.120:8443/healthz: Get "https://192.168.39.120:8443/healthz": dial tcp 192.168.39.120:8443: connect: connection refused
	I1010 19:23:21.058727  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:18.923057  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:18.923636  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:18.923671  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:18.923582  149057 retry.go:31] will retry after 2.09836426s: waiting for machine to come up
	I1010 19:23:21.023500  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:21.024054  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:21.024084  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:21.023999  149057 retry.go:31] will retry after 2.809962093s: waiting for machine to come up
	I1010 19:23:23.187135  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:23:23.187187  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:23:23.187203  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.233367  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.233414  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:23.558658  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.575108  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.575139  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.058679  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.065735  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:24.065763  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.559440  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.565460  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:23:24.571828  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:23:24.571859  147758 api_server.go:131] duration metric: took 4.013501806s to wait for apiserver health ...
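The healthz polling above simply retries GET https://192.168.39.120:8443/healthz until the apiserver stops answering 403/500 and returns 200 ok. A minimal sketch of such a loop, with TLS verification skipped so the example stays self-contained; the real checker authenticates with the cluster's client certificates, and the URL and intervals here are assumptions for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Verification is skipped only to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.120:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}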
	I1010 19:23:24.571869  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:24.571875  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:24.573875  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:23:24.575458  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:23:24.586870  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:23:24.624362  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:23:24.643465  147758 system_pods.go:59] 8 kube-system pods found
	I1010 19:23:24.643516  147758 system_pods.go:61] "coredns-7c65d6cfc9-fgtkg" [df696e79-ca6f-4d73-a57e-9c6cdc93c505] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:23:24.643532  147758 system_pods.go:61] "etcd-embed-certs-541370" [254fa12c-b0d2-499f-8dd9-c1505efeaaab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:23:24.643543  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [fcd3809d-d325-4481-8e86-c246e29458fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:23:24.643565  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ab0fdd6b-d9b7-48dc-b82f-29b21d2295ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:23:24.643584  147758 system_pods.go:61] "kube-proxy-f5l6x" [446383fa-44c5-4b9e-bfc5-e38799597e75] Running
	I1010 19:23:24.643592  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [1c6af7e7-ce16-4ae2-8feb-e5d474173de1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:23:24.643603  147758 system_pods.go:61] "metrics-server-6867b74b74-kw529" [aad00321-d499-4563-849e-286d6e699fc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:23:24.643611  147758 system_pods.go:61] "storage-provisioner" [df4ae621-5066-4276-9276-a0538a9f9dd1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:23:24.643620  147758 system_pods.go:74] duration metric: took 19.234558ms to wait for pod list to return data ...
	I1010 19:23:24.643637  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:23:24.651647  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:23:24.651683  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:23:24.651699  147758 node_conditions.go:105] duration metric: took 8.056629ms to run NodePressure ...
	I1010 19:23:24.651720  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:24.915651  147758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921104  147758 kubeadm.go:739] kubelet initialised
	I1010 19:23:24.921131  147758 kubeadm.go:740] duration metric: took 5.44643ms waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921142  147758 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:23:24.927535  147758 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:23.837827  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:23.838271  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:23.838295  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:23.838220  149057 retry.go:31] will retry after 2.562944835s: waiting for machine to come up
	I1010 19:23:26.402699  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:26.403250  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:26.403294  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:26.403192  149057 retry.go:31] will retry after 3.867656846s: waiting for machine to come up
	I1010 19:23:26.932764  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:28.936055  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.434959  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
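The pod_ready lines above are intermediate polls: the test keeps re-checking the pod until its Ready condition turns True. A minimal client-go sketch of the underlying condition check; the kubeconfig path is an assumption, and error handling is kept to the bare minimum.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod's Ready condition is True.
func isPodReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumed kubeconfig location for the sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ready, err := isPodReady(client, "kube-system", "coredns-7c65d6cfc9-fgtkg")
	fmt.Println("ready:", ready, "err:", err)
}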
	I1010 19:23:31.893914  148525 start.go:364] duration metric: took 2m17.836396131s to acquireMachinesLock for "default-k8s-diff-port-361847"
	I1010 19:23:31.893993  148525 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:31.894007  148525 fix.go:54] fixHost starting: 
	I1010 19:23:31.894438  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:31.894502  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:31.914583  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I1010 19:23:31.915054  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:31.915535  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:23:31.915560  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:31.915967  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:31.916207  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:31.916387  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:23:31.918035  148525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-361847: state=Stopped err=<nil>
	I1010 19:23:31.918073  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	W1010 19:23:31.918241  148525 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:31.920390  148525 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-361847" ...
	I1010 19:23:30.275222  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.275762  148123 main.go:141] libmachine: (old-k8s-version-947203) Found IP for machine: 192.168.61.112
	I1010 19:23:30.275779  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserving static IP address...
	I1010 19:23:30.275794  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has current primary IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.276478  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.276529  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserved static IP address: 192.168.61.112
	I1010 19:23:30.276553  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | skip adding static IP to network mk-old-k8s-version-947203 - found existing host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"}
	I1010 19:23:30.276574  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:23:30.276590  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting for SSH to be available...
	I1010 19:23:30.278486  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278854  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.278890  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278984  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:23:30.279016  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:23:30.279051  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:30.279068  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:23:30.279080  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:23:30.405053  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:30.405492  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:23:30.406224  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.408830  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409178  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.409204  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409462  148123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:23:30.409673  148123 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:30.409693  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:30.409916  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.412131  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412400  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.412427  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.412770  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.412965  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.413117  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.413368  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.413653  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.413668  148123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:30.525149  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:30.525176  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525417  148123 buildroot.go:166] provisioning hostname "old-k8s-version-947203"
	I1010 19:23:30.525478  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525703  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.528343  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528681  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.528708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528841  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.529035  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529185  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529303  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.529460  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.529635  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.529645  148123 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947203 && echo "old-k8s-version-947203" | sudo tee /etc/hostname
	I1010 19:23:30.655605  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947203
	
	I1010 19:23:30.655644  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.658439  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.658834  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.658869  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.659063  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.659285  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659458  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.659733  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.659905  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.659921  148123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:30.781974  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:30.782011  148123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:30.782033  148123 buildroot.go:174] setting up certificates
	I1010 19:23:30.782048  148123 provision.go:84] configureAuth start
	I1010 19:23:30.782059  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.782379  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.785189  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785584  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.785613  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785782  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.788086  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788432  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.788458  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788582  148123 provision.go:143] copyHostCerts
	I1010 19:23:30.788645  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:30.788660  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:30.788729  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:30.788839  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:30.788867  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:30.788900  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:30.788977  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:30.788986  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:30.789013  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:30.789081  148123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947203 san=[127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]
	I1010 19:23:31.240626  148123 provision.go:177] copyRemoteCerts
	I1010 19:23:31.240698  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:31.240737  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.243678  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244108  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.244138  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244356  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.244565  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.244765  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.244929  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.331337  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:31.356266  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1010 19:23:31.381186  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:31.405301  148123 provision.go:87] duration metric: took 623.235619ms to configureAuth
	I1010 19:23:31.405337  148123 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:31.405541  148123 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:23:31.405620  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.408111  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408444  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.408470  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408695  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.408947  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409109  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.409374  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.409583  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.409609  148123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:31.647023  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:31.647050  148123 machine.go:96] duration metric: took 1.237364012s to provisionDockerMachine
	I1010 19:23:31.647063  148123 start.go:293] postStartSetup for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:23:31.647073  148123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:31.647104  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.647464  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:31.647502  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.649991  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650288  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.650311  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650484  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.650637  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.650832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.650968  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.735985  148123 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:31.740689  148123 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:31.740725  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:31.740803  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:31.740947  148123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:31.741057  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:31.751383  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:31.777091  148123 start.go:296] duration metric: took 130.011618ms for postStartSetup
	I1010 19:23:31.777134  148123 fix.go:56] duration metric: took 20.455078936s for fixHost
	I1010 19:23:31.777158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.780083  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780661  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.780708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.781069  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781245  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781382  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.781534  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.781711  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.781721  148123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:31.893727  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588211.849777581
	
	I1010 19:23:31.893756  148123 fix.go:216] guest clock: 1728588211.849777581
	I1010 19:23:31.893763  148123 fix.go:229] Guest: 2024-10-10 19:23:31.849777581 +0000 UTC Remote: 2024-10-10 19:23:31.777138808 +0000 UTC m=+218.253674512 (delta=72.638773ms)
	I1010 19:23:31.893806  148123 fix.go:200] guest clock delta is within tolerance: 72.638773ms
	I1010 19:23:31.893813  148123 start.go:83] releasing machines lock for "old-k8s-version-947203", held for 20.571797307s
	I1010 19:23:31.893848  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.894151  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:31.896747  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897156  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.897207  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897385  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898036  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898375  148123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:31.898422  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.898461  148123 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:31.898487  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.901274  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901450  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901608  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901649  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901784  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.901920  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901945  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901952  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902112  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.902130  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902306  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902348  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.902412  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902533  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:32.016264  148123 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:32.022618  148123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:32.169502  148123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:32.175960  148123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:32.176039  148123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:32.193584  148123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:32.193615  148123 start.go:495] detecting cgroup driver to use...
	I1010 19:23:32.193698  148123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:32.210923  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:32.227172  148123 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:32.227254  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:32.242142  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:32.256896  148123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:32.387462  148123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:32.575184  148123 docker.go:233] disabling docker service ...
	I1010 19:23:32.575257  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:32.590667  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:32.604825  148123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:32.739827  148123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:32.863648  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:32.879435  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:32.899582  148123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:23:32.899659  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.915010  148123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:32.915081  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.931181  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.944038  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.955182  148123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:32.972220  148123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:32.982681  148123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:32.982739  148123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:32.997012  148123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:33.009468  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:33.180403  148123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:33.284934  148123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:33.285008  148123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:33.290063  148123 start.go:563] Will wait 60s for crictl version
	I1010 19:23:33.290124  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:33.294752  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:33.342710  148123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:33.342792  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.372821  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.409395  148123 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:23:33.411012  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:33.414374  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.414789  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:33.414836  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.415084  148123 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:33.420164  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:33.433442  148123 kubeadm.go:883] updating cluster {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:33.433631  148123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:23:33.433717  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:33.480013  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:33.480078  148123 ssh_runner.go:195] Run: which lz4
	I1010 19:23:33.485443  148123 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:33.490986  148123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:33.491032  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:23:31.921836  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Start
	I1010 19:23:31.922036  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring networks are active...
	I1010 19:23:31.922890  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network default is active
	I1010 19:23:31.923271  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network mk-default-k8s-diff-port-361847 is active
	I1010 19:23:31.923685  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Getting domain xml...
	I1010 19:23:31.924449  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Creating domain...
	I1010 19:23:33.241164  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting to get IP...
	I1010 19:23:33.242273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242713  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242814  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.242702  149213 retry.go:31] will retry after 195.013046ms: waiting for machine to come up
	I1010 19:23:33.438965  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439452  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.439379  149213 retry.go:31] will retry after 344.223823ms: waiting for machine to come up
	I1010 19:23:33.785167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785833  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785864  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.785780  149213 retry.go:31] will retry after 342.787658ms: waiting for machine to come up
	I1010 19:23:33.435066  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:34.936768  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:34.936800  147758 pod_ready.go:82] duration metric: took 10.009235225s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:34.936814  147758 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944395  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.944430  147758 pod_ready.go:82] duration metric: took 1.007599746s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944445  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953224  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.953255  147758 pod_ready.go:82] duration metric: took 8.801702ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953266  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.227279  148123 crio.go:462] duration metric: took 1.74186828s to copy over tarball
	I1010 19:23:35.227383  148123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:38.216499  148123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.989082447s)
	I1010 19:23:38.216533  148123 crio.go:469] duration metric: took 2.989218587s to extract the tarball
	I1010 19:23:38.216541  148123 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:38.259699  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:38.308284  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:38.308313  148123 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:23:38.308422  148123 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:23:38.308477  148123 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.308482  148123 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.308593  148123 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.308617  148123 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.308405  148123 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.308446  148123 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.308530  148123 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310558  148123 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.310612  148123 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.310642  148123 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.310652  148123 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310645  148123 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.310560  148123 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.310733  148123 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.311068  148123 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:23:38.469174  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.469563  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.488463  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.490815  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.491841  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.496079  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.501292  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:23:34.130443  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130998  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.130915  149213 retry.go:31] will retry after 393.100812ms: waiting for machine to come up
	I1010 19:23:34.525570  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526032  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526060  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.525980  149213 retry.go:31] will retry after 465.468437ms: waiting for machine to come up
	I1010 19:23:34.992775  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993348  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993386  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.993287  149213 retry.go:31] will retry after 907.884473ms: waiting for machine to come up
	I1010 19:23:35.902481  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902942  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:35.902878  149213 retry.go:31] will retry after 1.157806188s: waiting for machine to come up
	I1010 19:23:37.062068  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062777  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:37.062706  149213 retry.go:31] will retry after 1.432559208s: waiting for machine to come up
	I1010 19:23:38.496653  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497153  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:38.497066  149213 retry.go:31] will retry after 1.559787003s: waiting for machine to come up
	I1010 19:23:37.961068  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.065559  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.528757  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.528786  147758 pod_ready.go:82] duration metric: took 4.575513259s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.528802  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538002  147758 pod_ready.go:93] pod "kube-proxy-f5l6x" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.538034  147758 pod_ready.go:82] duration metric: took 9.22357ms for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538049  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543594  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.543615  147758 pod_ready.go:82] duration metric: took 5.558665ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543626  147758 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:38.581315  148123 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:23:38.581361  148123 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:23:38.581385  148123 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.581407  148123 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.581442  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.581457  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.642668  148123 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:23:38.642721  148123 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.642777  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658235  148123 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:23:38.658291  148123 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.658288  148123 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:23:38.658328  148123 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.658346  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658373  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.678038  148123 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:23:38.678097  148123 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.678155  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682719  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.682773  148123 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:23:38.682721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.682791  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.682812  148123 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:23:38.682845  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.682872  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.691181  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842626  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:38.842714  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.842754  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.842797  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.842721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.842851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842789  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.989994  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.005815  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:39.005923  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:39.010029  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:39.010105  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:39.010150  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:39.010230  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:39.134646  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:39.141431  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.176750  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:23:39.176945  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:23:39.176989  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:23:39.177038  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:23:39.177073  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:23:39.177104  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:23:39.335056  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:23:39.335113  148123 cache_images.go:92] duration metric: took 1.026784312s to LoadCachedImages
	W1010 19:23:39.335180  148123 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1010 19:23:39.335192  148123 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.20.0 crio true true} ...
	I1010 19:23:39.335305  148123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-947203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:39.335380  148123 ssh_runner.go:195] Run: crio config
	I1010 19:23:39.394307  148123 cni.go:84] Creating CNI manager for ""
	I1010 19:23:39.394338  148123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:39.394350  148123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:39.394378  148123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947203 NodeName:old-k8s-version-947203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:23:39.394572  148123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-947203"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:39.394662  148123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:23:39.405510  148123 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:39.405600  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:39.415968  148123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1010 19:23:39.443234  148123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:39.462448  148123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:23:39.481959  148123 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:39.486188  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:39.501922  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:39.642769  148123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:39.661335  148123 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203 for IP: 192.168.61.112
	I1010 19:23:39.661363  148123 certs.go:194] generating shared ca certs ...
	I1010 19:23:39.661384  148123 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:39.661595  148123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:39.661658  148123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:39.661671  148123 certs.go:256] generating profile certs ...
	I1010 19:23:39.661779  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key
	I1010 19:23:39.661867  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52
	I1010 19:23:39.661922  148123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key
	I1010 19:23:39.662312  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:39.662395  148123 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:39.662410  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:39.662447  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:39.662501  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:39.662531  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:39.662622  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:39.663372  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:39.710366  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:39.752724  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:39.801408  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:39.843557  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 19:23:39.892522  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:39.938519  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:39.966140  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:39.992974  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:40.019381  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:40.045769  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:40.080683  148123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:40.099747  148123 ssh_runner.go:195] Run: openssl version
	I1010 19:23:40.107865  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:40.123158  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128594  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128660  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.135319  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:40.147514  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:40.162463  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168387  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168512  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.176304  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:40.191306  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:40.204196  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211044  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211122  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.219574  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
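	
	Each CA above is installed under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trust anchors. A small sketch of that pattern for a single certificate, using a hypothetical PEM path:
	
	    # Link a CA into the trust directory under its subject-hash name.
	    cert=/usr/share/ca-certificates/example-ca.pem
	    hash=$(openssl x509 -hash -noout -in "$cert")
	    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
	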
	I1010 19:23:40.232634  148123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:40.237639  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:40.244141  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:40.251188  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:40.258538  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:40.265029  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:40.271754  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
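	
	The `-checkend 86400` runs above ask whether each control-plane certificate will still be valid 24 hours (86400 seconds) from now; a non-zero exit from openssl is what flags a certificate for regeneration. A minimal sketch of that check on one file:
	
	    # openssl exits 0 if the cert is still valid in 24h, non-zero if it will have expired.
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	        echo "still valid for at least another 24h"
	    else
	        echo "expires within 24h - regenerate"
	    fi
	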
	I1010 19:23:40.278352  148123 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:40.278453  148123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:40.278518  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.317771  148123 cri.go:89] found id: ""
	I1010 19:23:40.317838  148123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:40.330494  148123 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:40.330520  148123 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:40.330576  148123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:40.341984  148123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:40.343386  148123 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:23:40.344334  148123 kubeconfig.go:62] /home/jenkins/minikube-integration/19787-81676/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-947203" cluster setting kubeconfig missing "old-k8s-version-947203" context setting]
	I1010 19:23:40.345360  148123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:40.347128  148123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:40.357902  148123 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I1010 19:23:40.357944  148123 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:40.357961  148123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:40.358031  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.399970  148123 cri.go:89] found id: ""
	I1010 19:23:40.400057  148123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:40.418527  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:40.429182  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:40.429217  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:40.429262  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:40.439287  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:40.439363  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:40.449479  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:40.459288  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:40.459359  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:40.472733  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.484890  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:40.484958  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.495301  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:40.504750  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:40.504820  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
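	
	The grep/rm sequence above is a stale-kubeconfig sweep: any of the four kubeconfigs that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. Condensed into a loop (a sketch, not the literal commands the test runs):
	
	    # Drop kubeconfigs that do not reference the expected control-plane endpoint.
	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done
	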
	I1010 19:23:40.515034  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:40.526168  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:40.675022  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.566371  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.815296  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.930082  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
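	
	Rather than a full `kubeadm init`, the restart replays individual init phases against the same generated config: certificates, kubeconfigs, kubelet start, the static-pod control plane, then local etcd. The same sequence as a condensed sketch of the commands logged above:
	
	    # Phased control-plane restart from an existing kubeadm.yaml (order matters).
	    cfg=/var/tmp/minikube/kubeadm.yaml
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	        sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	            kubeadm init phase $phase --config "$cfg"
	    done
	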
	I1010 19:23:42.027597  148123 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:42.027713  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:42.528219  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.527735  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
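	
	The repeated `pgrep -xnf kube-apiserver.*minikube.*` lines are a poll loop: roughly every 500ms the node is asked whether a kube-apiserver process exists yet. A sketch of that wait with a hypothetical five-minute deadline:
	
	    # Poll for the kube-apiserver process (newest match against the full command line).
	    deadline=$((SECONDS + 300))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        if [ "$SECONDS" -ge "$deadline" ]; then
	            echo "kube-apiserver did not appear" >&2
	            exit 1
	        fi
	        sleep 0.5
	    done
	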
	I1010 19:23:40.058247  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058783  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058835  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:40.058696  149213 retry.go:31] will retry after 2.214094081s: waiting for machine to come up
	I1010 19:23:42.274629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275194  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:42.275106  149213 retry.go:31] will retry after 2.126528577s: waiting for machine to come up
	I1010 19:23:42.550865  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:45.051043  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:44.028463  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.527916  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.027911  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.528797  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.028772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.527982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.027799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.527894  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.028352  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.527935  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.403101  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403575  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403616  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:44.403534  149213 retry.go:31] will retry after 3.603964622s: waiting for machine to come up
	I1010 19:23:48.008726  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009142  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009191  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:48.009100  149213 retry.go:31] will retry after 3.639744981s: waiting for machine to come up
	I1010 19:23:47.551003  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:49.661572  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:52.858209  147213 start.go:364] duration metric: took 56.558774237s to acquireMachinesLock for "no-preload-320324"
	I1010 19:23:52.858274  147213 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:52.858283  147213 fix.go:54] fixHost starting: 
	I1010 19:23:52.858705  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:52.858742  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:52.878428  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I1010 19:23:52.878955  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:52.879563  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:23:52.879599  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:52.879945  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:52.880144  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:23:52.880282  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:23:52.881626  147213 fix.go:112] recreateIfNeeded on no-preload-320324: state=Stopped err=<nil>
	I1010 19:23:52.881650  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	W1010 19:23:52.881799  147213 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:52.883912  147213 out.go:177] * Restarting existing kvm2 VM for "no-preload-320324" ...
	I1010 19:23:49.028421  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:49.528271  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.028462  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.527867  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.028782  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.528581  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.028732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.027852  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.528647  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.885239  147213 main.go:141] libmachine: (no-preload-320324) Calling .Start
	I1010 19:23:52.885429  147213 main.go:141] libmachine: (no-preload-320324) Ensuring networks are active...
	I1010 19:23:52.886211  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network default is active
	I1010 19:23:52.886749  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network mk-no-preload-320324 is active
	I1010 19:23:52.887310  147213 main.go:141] libmachine: (no-preload-320324) Getting domain xml...
	I1010 19:23:52.888034  147213 main.go:141] libmachine: (no-preload-320324) Creating domain...
	I1010 19:23:51.652975  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653464  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Found IP for machine: 192.168.50.32
	I1010 19:23:51.653487  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserving static IP address...
	I1010 19:23:51.653509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has current primary IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653910  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.653956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | skip adding static IP to network mk-default-k8s-diff-port-361847 - found existing host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"}
	I1010 19:23:51.653974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserved static IP address: 192.168.50.32
	I1010 19:23:51.653993  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for SSH to be available...
	I1010 19:23:51.654006  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Getting to WaitForSSH function...
	I1010 19:23:51.655927  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656210  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.656240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656334  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH client type: external
	I1010 19:23:51.656372  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa (-rw-------)
	I1010 19:23:51.656409  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:51.656425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | About to run SSH command:
	I1010 19:23:51.656436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | exit 0
	I1010 19:23:51.780839  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:51.781206  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetConfigRaw
	I1010 19:23:51.781939  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:51.784347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784663  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.784696  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784918  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:23:51.785134  148525 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:51.785158  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:51.785403  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.787817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788306  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.788347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788547  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.788807  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789038  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789274  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.789515  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.789802  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.789825  148525 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:51.893367  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:51.893399  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893652  148525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-361847"
	I1010 19:23:51.893699  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.896986  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897377  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.897422  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897662  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.897815  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.897949  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.898064  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.898302  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.898489  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.898502  148525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-361847 && echo "default-k8s-diff-port-361847" | sudo tee /etc/hostname
	I1010 19:23:52.015158  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361847
	
	I1010 19:23:52.015199  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.018094  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018468  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.018497  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018683  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.018901  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019039  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.019474  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.019690  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.019708  148525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-361847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-361847/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-361847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:52.133923  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:52.133960  148525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:52.134007  148525 buildroot.go:174] setting up certificates
	I1010 19:23:52.134023  148525 provision.go:84] configureAuth start
	I1010 19:23:52.134043  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:52.134351  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.137242  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137637  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.137670  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137860  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.140264  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.140672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140833  148525 provision.go:143] copyHostCerts
	I1010 19:23:52.140907  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:52.140922  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:52.140977  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:52.141088  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:52.141098  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:52.141118  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:52.141175  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:52.141182  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:52.141213  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:52.141264  148525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-361847 san=[127.0.0.1 192.168.50.32 default-k8s-diff-port-361847 localhost minikube]
	I1010 19:23:52.241146  148525 provision.go:177] copyRemoteCerts
	I1010 19:23:52.241212  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:52.241241  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.244061  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244463  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.244490  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244731  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.244929  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.245110  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.245228  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.327309  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:52.352288  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 19:23:52.376308  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:52.400807  148525 provision.go:87] duration metric: took 266.765119ms to configureAuth
	I1010 19:23:52.400862  148525 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:52.401065  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:52.401171  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.403552  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.403919  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.403950  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.404173  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.404371  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404513  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.404743  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.404927  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.404949  148525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:52.622902  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:52.622930  148525 machine.go:96] duration metric: took 837.779579ms to provisionDockerMachine
	I1010 19:23:52.622942  148525 start.go:293] postStartSetup for "default-k8s-diff-port-361847" (driver="kvm2")
	I1010 19:23:52.622952  148525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:52.622968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.623331  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:52.623369  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.626106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626435  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.626479  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626721  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.626932  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.627091  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.627262  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.708050  148525 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:52.712524  148525 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:52.712550  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:52.712608  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:52.712688  148525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:52.712782  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:52.723719  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:52.747686  148525 start.go:296] duration metric: took 124.729371ms for postStartSetup
	I1010 19:23:52.747727  148525 fix.go:56] duration metric: took 20.853721623s for fixHost
	I1010 19:23:52.747749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.750316  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750645  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.750677  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.751046  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751195  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751333  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.751511  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.751733  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.751749  148525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:52.857986  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588232.831281012
	
	I1010 19:23:52.858019  148525 fix.go:216] guest clock: 1728588232.831281012
	I1010 19:23:52.858029  148525 fix.go:229] Guest: 2024-10-10 19:23:52.831281012 +0000 UTC Remote: 2024-10-10 19:23:52.747731551 +0000 UTC m=+158.845659062 (delta=83.549461ms)
	I1010 19:23:52.858075  148525 fix.go:200] guest clock delta is within tolerance: 83.549461ms
	I1010 19:23:52.858088  148525 start.go:83] releasing machines lock for "default-k8s-diff-port-361847", held for 20.964121636s
	I1010 19:23:52.858120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.858491  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.861220  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.861672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861828  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862337  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862548  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862655  148525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:52.862702  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.862825  148525 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:52.862854  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.865579  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.865960  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866290  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866300  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.866319  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866423  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866496  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866648  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866671  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.866798  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866910  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.966354  148525 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:52.972526  148525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:53.119801  148525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:53.126287  148525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:53.126355  148525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:53.147301  148525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:53.147325  148525 start.go:495] detecting cgroup driver to use...
	I1010 19:23:53.147381  148525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:53.167368  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:53.183239  148525 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:53.183308  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:53.203230  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:53.217261  148525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:53.343555  148525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:53.491952  148525 docker.go:233] disabling docker service ...
	I1010 19:23:53.492054  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:53.508136  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:53.521662  148525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:53.651858  148525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:53.781954  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
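	
	The block above follows the usual stop/disable/mask pattern so that neither cri-dockerd nor dockerd can reclaim the container runtime after restart: the sockets are disabled and the services masked. A generalized sketch of that pattern (slightly broader than the exact units touched in the log):
	
	    # Keep a competing runtime from coming back: stop it, disable its socket, mask the service.
	    for unit in cri-docker docker; do
	        sudo systemctl stop -f "${unit}.socket" "${unit}.service" || true
	        sudo systemctl disable "${unit}.socket" || true
	        sudo systemctl mask "${unit}.service" || true
	    done
	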
	I1010 19:23:53.803934  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:53.826070  148525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:53.826146  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.837506  148525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:53.837587  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.848653  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.860511  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.873254  148525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:53.887862  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.899507  148525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.923325  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.934999  148525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:53.946869  148525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:53.946945  148525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:53.968116  148525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:53.980109  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:54.106345  148525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:54.210345  148525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:54.210417  148525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:54.215968  148525 start.go:563] Will wait 60s for crictl version
	I1010 19:23:54.216037  148525 ssh_runner.go:195] Run: which crictl
	I1010 19:23:54.219885  148525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:54.260286  148525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:54.260375  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.289908  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.320940  148525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
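	The cri-o preparation above is done with sed over /etc/crio/crio.conf.d/02-crio.conf (pointing pause_image at registry.k8s.io/pause:3.10 and cgroup_manager at cgroupfs). A rough Go equivalent of those two line rewrites, as an illustration only and not minikube's actual crio.go code, assuming the drop-in file already exists:

	// Illustrative sketch only (not minikube's crio.go): rewrite the pause image
	// and cgroup manager lines in cri-o's drop-in config, mirroring the sed
	// commands logged above.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// (?m) makes ^ and $ match per line, so each whole matching line is
		// replaced, just like sed 's|^.*pause_image = .*$|...|'.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
	}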
	I1010 19:23:52.050137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.060194  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:56.551981  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.027982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.528065  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.027890  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.527753  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.027877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.028263  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.028743  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.528039  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
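	The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines are minikube polling, roughly twice a second, for the restarted kube-apiserver process to appear. A minimal sketch of that wait loop in Go, with illustrative names (waitForProcess is not a minikube helper) and running pgrep locally rather than over SSH:

	// Minimal sketch: poll pgrep until a process matching the pattern exists
	// or a deadline passes, mirroring the repeated Run: sudo pgrep lines above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForProcess(pattern string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 when at least one process matches the pattern.
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for process " + pattern)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute); err != nil {
			fmt.Println(err)
		}
	}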
	I1010 19:23:54.234149  147213 main.go:141] libmachine: (no-preload-320324) Waiting to get IP...
	I1010 19:23:54.235147  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.235598  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.235657  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.235580  149378 retry.go:31] will retry after 308.921504ms: waiting for machine to come up
	I1010 19:23:54.546327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.547002  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.547029  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.546956  149378 retry.go:31] will retry after 288.92327ms: waiting for machine to come up
	I1010 19:23:54.837625  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.838136  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.838164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.838054  149378 retry.go:31] will retry after 321.948113ms: waiting for machine to come up
	I1010 19:23:55.161940  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.162494  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.162526  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.162441  149378 retry.go:31] will retry after 573.848095ms: waiting for machine to come up
	I1010 19:23:55.739080  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.739592  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.739620  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.739494  149378 retry.go:31] will retry after 529.087622ms: waiting for machine to come up
	I1010 19:23:56.270324  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.270899  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.270929  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.270850  149378 retry.go:31] will retry after 629.204989ms: waiting for machine to come up
	I1010 19:23:56.901836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.902283  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.902325  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.902222  149378 retry.go:31] will retry after 804.309499ms: waiting for machine to come up
	I1010 19:23:57.708806  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:57.709175  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:57.709208  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:57.709151  149378 retry.go:31] will retry after 1.204078295s: waiting for machine to come up
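	The "will retry after ...: waiting for machine to come up" lines show a growing, jittered backoff while libvirt starts the VM and a DHCP lease appears for its MAC address. A minimal sketch of that pattern, with illustrative names and delays (this is not minikube's retry.go):

	// Illustrative sketch: retry an operation with a growing, jittered delay,
	// as the retry lines above suggest.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(op func() error, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Grow the wait roughly linearly and add jitter so retries don't align.
			wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		err := retryWithBackoff(func() error {
			return errors.New("waiting for machine to come up")
		}, 5, 300*time.Millisecond)
		fmt.Println("final:", err)
	}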
	I1010 19:23:54.322534  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:54.325744  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326217  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:54.326257  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326533  148525 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:54.331527  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:54.343881  148525 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:54.344033  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:54.344084  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:54.389066  148525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:54.389149  148525 ssh_runner.go:195] Run: which lz4
	I1010 19:23:54.393550  148525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:54.397787  148525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:54.397833  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:55.897111  148525 crio.go:462] duration metric: took 1.503593301s to copy over tarball
	I1010 19:23:55.897212  148525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:58.060691  148525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16343467s)
	I1010 19:23:58.060731  148525 crio.go:469] duration metric: took 2.163580526s to extract the tarball
	I1010 19:23:58.060741  148525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:58.103877  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:58.162881  148525 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:58.162907  148525 cache_images.go:84] Images are preloaded, skipping loading
	I1010 19:23:58.162915  148525 kubeadm.go:934] updating node { 192.168.50.32 8444 v1.31.1 crio true true} ...
	I1010 19:23:58.163031  148525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-361847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:58.163098  148525 ssh_runner.go:195] Run: crio config
	I1010 19:23:58.219804  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:23:58.219827  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:58.219837  148525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:58.219861  148525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-361847 NodeName:default-k8s-diff-port-361847 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:58.219982  148525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-361847"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:58.220042  148525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:58.231444  148525 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:58.231565  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:58.241835  148525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1010 19:23:58.259408  148525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:58.276571  148525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I1010 19:23:58.294640  148525 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:58.298503  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:58.312286  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:58.449757  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:58.467342  148525 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847 for IP: 192.168.50.32
	I1010 19:23:58.467377  148525 certs.go:194] generating shared ca certs ...
	I1010 19:23:58.467398  148525 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:58.467583  148525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:58.467642  148525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:58.467655  148525 certs.go:256] generating profile certs ...
	I1010 19:23:58.467826  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/client.key
	I1010 19:23:58.467895  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key.ae5e3f04
	I1010 19:23:58.467951  148525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key
	I1010 19:23:58.468089  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:58.468136  148525 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:58.468153  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:58.468194  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:58.468226  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:58.468260  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:58.468317  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:58.468931  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:58.529632  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:58.571900  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:58.612599  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:58.645536  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 19:23:58.675961  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:23:58.700712  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:58.725355  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:58.751138  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:58.775832  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:58.800729  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:58.825558  148525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:58.843331  148525 ssh_runner.go:195] Run: openssl version
	I1010 19:23:58.849271  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:58.861031  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865721  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865797  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.871961  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:58.884520  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:58.896744  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901507  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901571  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.907366  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:58.919784  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:58.931972  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936897  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936981  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.943007  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:59.052037  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:01.551982  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:59.028519  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:59.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.528623  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.028697  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.528030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.028062  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.527876  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.028745  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.528569  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.914409  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:58.914894  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:58.914927  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:58.914831  149378 retry.go:31] will retry after 1.631827888s: waiting for machine to come up
	I1010 19:24:00.548505  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:00.549135  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:00.549164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:00.549043  149378 retry.go:31] will retry after 2.126895157s: waiting for machine to come up
	I1010 19:24:02.678328  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:02.678907  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:02.678969  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:02.678891  149378 retry.go:31] will retry after 2.754376625s: waiting for machine to come up
	I1010 19:23:58.955104  148525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:58.959833  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:58.966528  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:58.973590  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:58.982390  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:58.990767  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:58.997162  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
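	The openssl runs just above use "-checkend 86400" to confirm each existing control-plane certificate remains valid for at least another 24 hours before it is reused. The same check could be written in Go roughly as follows (a sketch with an illustrative path, not minikube's certs.go):

	// Illustrative sketch: verify a PEM certificate will not expire within the
	// given window, mirroring `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func checkEnd(path string, window time.Duration) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return fmt.Errorf("%s: no PEM data found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return err
		}
		if time.Now().Add(window).After(cert.NotAfter) {
			return fmt.Errorf("%s expires at %s (within %v)", path, cert.NotAfter, window)
		}
		return nil
	}

	func main() {
		if err := checkEnd("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour); err != nil {
			fmt.Println(err)
		}
	}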
	I1010 19:23:59.003647  148525 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:59.003786  148525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:59.003865  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.048772  148525 cri.go:89] found id: ""
	I1010 19:23:59.048869  148525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:59.061267  148525 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:59.061288  148525 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:59.061338  148525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:59.072629  148525 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:59.074287  148525 kubeconfig.go:125] found "default-k8s-diff-port-361847" server: "https://192.168.50.32:8444"
	I1010 19:23:59.077880  148525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:59.090738  148525 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I1010 19:23:59.090783  148525 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:59.090799  148525 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:59.090886  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.136762  148525 cri.go:89] found id: ""
	I1010 19:23:59.136888  148525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:59.155937  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:59.166471  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:59.166493  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:59.166549  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:23:59.178247  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:59.178313  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:59.189455  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:23:59.200127  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:59.200204  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:59.210764  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.221048  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:59.221119  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.231762  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:23:59.242152  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:59.242217  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:59.252608  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:59.265219  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:59.391743  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.243288  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.453782  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.532137  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.623598  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:00.623711  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.124678  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.624626  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.667587  148525 api_server.go:72] duration metric: took 1.043987857s to wait for apiserver process to appear ...
	I1010 19:24:01.667621  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:01.667649  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:01.668298  148525 api_server.go:269] stopped: https://192.168.50.32:8444/healthz: Get "https://192.168.50.32:8444/healthz": dial tcp 192.168.50.32:8444: connect: connection refused
	I1010 19:24:02.168273  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.275654  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.275695  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.275713  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.309713  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.309770  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.668325  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.684992  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:05.685031  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.168198  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.176584  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:06.176627  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.668130  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.682049  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:24:06.692780  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:06.692811  148525 api_server.go:131] duration metric: took 5.025182717s to wait for apiserver health ...
	I1010 19:24:06.692820  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:24:06.692831  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:06.694447  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
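	The healthz sequence above is typical of an apiserver restart: first a connection refused, then 403/500 responses while post-start hooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still completing, and finally 200. A minimal Go sketch of that polling loop, assuming an anonymous probe that skips TLS verification (illustration only, not minikube's api_server.go):

	// Illustrative sketch: poll the apiserver /healthz endpoint until it
	// returns 200, tolerating the 403/500 responses seen above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The probe is anonymous, so skip certificate verification (illustration only).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.32:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}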
	I1010 19:24:03.558797  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:06.054012  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:04.028711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:04.528770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.028716  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.528083  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.028204  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.528430  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.027972  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.027829  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.528676  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.435450  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:05.435940  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:05.435970  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:05.435888  149378 retry.go:31] will retry after 2.981990051s: waiting for machine to come up
	I1010 19:24:08.419385  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:08.419982  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:08.420006  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:08.419905  149378 retry.go:31] will retry after 3.976204267s: waiting for machine to come up
	I1010 19:24:06.695841  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:06.711212  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:06.747753  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:06.768344  148525 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:06.768429  148525 system_pods.go:61] "coredns-7c65d6cfc9-rv8vq" [93b209ea-bb5f-40c5-aea8-8771b785f021] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:06.768446  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [65129999-984d-497c-a6e1-9c53a5374991] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:06.768452  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [5f18ba24-29cf-433e-a70d-23757278c04f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:06.768460  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [c189c785-8ac5-4003-802d-9e7c089d450e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:06.768467  148525 system_pods.go:61] "kube-proxy-v5lm8" [e78eabf9-5c65-4cba-83fd-0837cef05126] Running
	I1010 19:24:06.768476  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [4f84f0f5-e255-4534-9db3-e5cfee0b2447] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:06.768485  148525 system_pods.go:61] "metrics-server-6867b74b74-h5kjm" [a3979b79-bd21-490b-97ac-0a78efd43a99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:06.768493  148525 system_pods.go:61] "storage-provisioner" [ca8606d3-9adb-46da-886a-3081b11b52a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:24:06.768499  148525 system_pods.go:74] duration metric: took 20.716461ms to wait for pod list to return data ...
	I1010 19:24:06.768509  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:06.777935  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:06.777973  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:06.777988  148525 node_conditions.go:105] duration metric: took 9.473726ms to run NodePressure ...
	I1010 19:24:06.778019  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:07.053296  148525 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057585  148525 kubeadm.go:739] kubelet initialised
	I1010 19:24:07.057608  148525 kubeadm.go:740] duration metric: took 4.283027ms waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057618  148525 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:07.064157  148525 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.069962  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.069989  148525 pod_ready.go:82] duration metric: took 5.791958ms for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.069999  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.070022  148525 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.075615  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075644  148525 pod_ready.go:82] duration metric: took 5.608749ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.075654  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075661  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.081717  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081743  148525 pod_ready.go:82] duration metric: took 6.074977ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.081754  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081761  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.152204  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152244  148525 pod_ready.go:82] duration metric: took 70.475599ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.152258  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152266  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551283  148525 pod_ready.go:93] pod "kube-proxy-v5lm8" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:07.551311  148525 pod_ready.go:82] duration metric: took 399.036581ms for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551324  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:08.550896  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:10.551437  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:09.028538  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:09.527883  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.028693  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.528439  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.028528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.528679  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.028002  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.527904  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.028685  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.527833  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.401115  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401808  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has current primary IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401841  147213 main.go:141] libmachine: (no-preload-320324) Found IP for machine: 192.168.72.11
	I1010 19:24:12.401856  147213 main.go:141] libmachine: (no-preload-320324) Reserving static IP address...
	I1010 19:24:12.402368  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.402407  147213 main.go:141] libmachine: (no-preload-320324) DBG | skip adding static IP to network mk-no-preload-320324 - found existing host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"}
	I1010 19:24:12.402426  147213 main.go:141] libmachine: (no-preload-320324) Reserved static IP address: 192.168.72.11
	I1010 19:24:12.402443  147213 main.go:141] libmachine: (no-preload-320324) Waiting for SSH to be available...
	I1010 19:24:12.402458  147213 main.go:141] libmachine: (no-preload-320324) DBG | Getting to WaitForSSH function...
	I1010 19:24:12.404803  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405200  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.405226  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405461  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH client type: external
	I1010 19:24:12.405494  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa (-rw-------)
	I1010 19:24:12.405527  147213 main.go:141] libmachine: (no-preload-320324) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:24:12.405541  147213 main.go:141] libmachine: (no-preload-320324) DBG | About to run SSH command:
	I1010 19:24:12.405554  147213 main.go:141] libmachine: (no-preload-320324) DBG | exit 0
	I1010 19:24:12.529010  147213 main.go:141] libmachine: (no-preload-320324) DBG | SSH cmd err, output: <nil>: 
	I1010 19:24:12.529401  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetConfigRaw
	I1010 19:24:12.530257  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.533285  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533692  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.533727  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533963  147213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/config.json ...
	I1010 19:24:12.534205  147213 machine.go:93] provisionDockerMachine start ...
	I1010 19:24:12.534230  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:12.534450  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.536585  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.536976  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.537003  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.537133  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.537323  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537512  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537689  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.537925  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.538138  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.538151  147213 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:24:12.641679  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:24:12.641706  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.641964  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:24:12.642002  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.642235  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.645149  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645488  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.645521  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.645836  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646001  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646155  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.646352  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.646533  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.646545  147213 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320324 && echo "no-preload-320324" | sudo tee /etc/hostname
	I1010 19:24:12.766449  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320324
	
	I1010 19:24:12.766480  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.769836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770331  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.770356  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770584  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.770810  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.770962  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.771119  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.771252  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.771448  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.771470  147213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320324/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:24:12.882458  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:24:12.882495  147213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:24:12.882537  147213 buildroot.go:174] setting up certificates
	I1010 19:24:12.882547  147213 provision.go:84] configureAuth start
	I1010 19:24:12.882562  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.882865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.885854  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886139  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.886173  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886308  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.888479  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.888819  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888976  147213 provision.go:143] copyHostCerts
	I1010 19:24:12.889037  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:24:12.889049  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:24:12.889102  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:24:12.889235  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:24:12.889246  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:24:12.889278  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:24:12.889370  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:24:12.889381  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:24:12.889406  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:24:12.889493  147213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.no-preload-320324 san=[127.0.0.1 192.168.72.11 localhost minikube no-preload-320324]
	I1010 19:24:12.978176  147213 provision.go:177] copyRemoteCerts
	I1010 19:24:12.978235  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:24:12.978261  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.981662  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982182  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.982218  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.982647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.982829  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.983005  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.067269  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:24:13.092777  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 19:24:13.118530  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:24:13.143401  147213 provision.go:87] duration metric: took 260.833877ms to configureAuth
	I1010 19:24:13.143436  147213 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:24:13.143678  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:13.143776  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.147086  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147507  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.147531  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147787  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.148032  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148222  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.148660  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.149013  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.149041  147213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:24:13.375683  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:24:13.375714  147213 machine.go:96] duration metric: took 841.493636ms to provisionDockerMachine
	I1010 19:24:13.375736  147213 start.go:293] postStartSetup for "no-preload-320324" (driver="kvm2")
	I1010 19:24:13.375754  147213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:24:13.375775  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.376085  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:24:13.376116  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.378855  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379179  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.379224  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379408  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.379608  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.379769  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.379910  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.459580  147213 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:24:13.463644  147213 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:24:13.463674  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:24:13.463751  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:24:13.463845  147213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:24:13.463963  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:24:13.473483  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:13.498773  147213 start.go:296] duration metric: took 123.021762ms for postStartSetup
	I1010 19:24:13.498814  147213 fix.go:56] duration metric: took 20.640532088s for fixHost
	I1010 19:24:13.498834  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.501681  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502243  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.502281  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502476  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.502679  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502835  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502993  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.503177  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.503383  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.503396  147213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:24:13.613929  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588253.586950075
	
	I1010 19:24:13.613954  147213 fix.go:216] guest clock: 1728588253.586950075
	I1010 19:24:13.613963  147213 fix.go:229] Guest: 2024-10-10 19:24:13.586950075 +0000 UTC Remote: 2024-10-10 19:24:13.498818059 +0000 UTC m=+359.788559229 (delta=88.132016ms)
	I1010 19:24:13.613988  147213 fix.go:200] guest clock delta is within tolerance: 88.132016ms
	I1010 19:24:13.614020  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 20.755775587s
	I1010 19:24:13.614063  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.614473  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:13.617327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.617694  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.617721  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.618016  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618670  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618884  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618989  147213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:24:13.619047  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.619142  147213 ssh_runner.go:195] Run: cat /version.json
	I1010 19:24:13.619185  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.621972  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622229  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622322  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622348  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622533  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622666  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622697  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622736  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.622865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622930  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623059  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.623073  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.623225  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623349  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.720999  147213 ssh_runner.go:195] Run: systemctl --version
	I1010 19:24:13.727679  147213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:24:09.562834  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:12.058686  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:13.870558  147213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:24:13.877853  147213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:24:13.877923  147213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:24:13.896295  147213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:24:13.896325  147213 start.go:495] detecting cgroup driver to use...
	I1010 19:24:13.896400  147213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:24:13.913122  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:24:13.929359  147213 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:24:13.929437  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:24:13.944840  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:24:13.960062  147213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:24:14.090774  147213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:24:14.246094  147213 docker.go:233] disabling docker service ...
	I1010 19:24:14.246161  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:24:14.264682  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:24:14.280264  147213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:24:14.437156  147213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:24:14.569220  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:24:14.585723  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:24:14.607349  147213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:24:14.607429  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.619113  147213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:24:14.619198  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.631818  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.643977  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.655753  147213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:24:14.667235  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.679225  147213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.698760  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.710440  147213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:24:14.722565  147213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:24:14.722625  147213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:24:14.740587  147213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:24:14.752630  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:14.887728  147213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:24:14.989026  147213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:24:14.989109  147213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:24:14.995309  147213 start.go:563] Will wait 60s for crictl version
	I1010 19:24:14.995366  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.999840  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:24:15.043758  147213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:24:15.043856  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.079274  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.116630  147213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:24:13.050633  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:15.552413  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:14.028090  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:14.528072  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.028648  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.528388  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.028060  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.528472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.028098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.528125  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.028563  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.528613  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.118343  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:15.121596  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122101  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:15.122133  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122396  147213 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1010 19:24:15.127140  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:15.141249  147213 kubeadm.go:883] updating cluster {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:24:15.141375  147213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:24:15.141417  147213 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:24:15.183271  147213 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:24:15.183303  147213 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:24:15.183412  147213 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.183444  147213 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.183452  147213 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.183459  147213 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1010 19:24:15.183422  147213 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.183493  147213 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.183512  147213 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.183507  147213 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.185099  147213 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.185098  147213 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.185103  147213 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.185106  147213 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.328484  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.333573  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.340047  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.358922  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1010 19:24:15.359800  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.366668  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.409942  147213 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1010 19:24:15.409995  147213 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.410050  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.416186  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.452343  147213 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1010 19:24:15.452385  147213 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.452426  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.533567  147213 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1010 19:24:15.533620  147213 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.533671  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585611  147213 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1010 19:24:15.585659  147213 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.585685  147213 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1010 19:24:15.585712  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585724  147213 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.585765  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585769  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.585805  147213 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1010 19:24:15.585832  147213 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.585856  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.585872  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585943  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.603131  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.661918  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.683739  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.683760  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.683833  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.683880  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.685385  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.792253  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.818116  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.818183  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.818289  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.818321  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.818402  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.878069  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1010 19:24:15.878202  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.940520  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.953841  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1010 19:24:15.953955  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:15.953990  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.954047  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1010 19:24:15.954115  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1010 19:24:15.954120  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1010 19:24:15.954130  147213 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954144  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:15.954157  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954205  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:16.005975  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1010 19:24:16.006028  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1010 19:24:16.006090  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:16.023905  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1010 19:24:16.023990  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1010 19:24:16.024024  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:16.024023  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1010 19:24:16.033715  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.150881  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.144766677s)
	I1010 19:24:18.150935  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1010 19:24:18.150931  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.196753845s)
	I1010 19:24:18.150944  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.126894115s)
	I1010 19:24:18.150973  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1010 19:24:18.150953  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1010 19:24:18.150982  147213 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.117235962s)
	I1010 19:24:18.151002  147213 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151014  147213 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1010 19:24:18.151053  147213 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.151069  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151097  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.059223  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:14.059252  148525 pod_ready.go:82] duration metric: took 6.507918149s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:14.059266  148525 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:16.066908  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.082398  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.051799  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:20.552644  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:19.028306  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:19.527854  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.028765  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.528245  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.027936  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.528172  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.028527  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.528446  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.028709  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.528711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.952099  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.801005716s)
	I1010 19:24:21.952134  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1010 19:24:21.952163  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952165  147213 ssh_runner.go:235] Completed: which crictl: (3.801048272s)
	I1010 19:24:21.952212  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952225  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:21.993627  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:20.566055  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:22.567145  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:23.053514  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:25.554151  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:24.028402  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:24.528516  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.028207  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.527928  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.028032  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.527826  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.028609  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.528448  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.028018  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.528597  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.929370  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.977128659s)
	I1010 19:24:23.929418  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1010 19:24:23.929450  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929498  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.935844384s)
	I1010 19:24:23.929532  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929551  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:26.009485  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079908324s)
	I1010 19:24:26.009567  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 19:24:26.009484  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079925224s)
	I1010 19:24:26.009641  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1010 19:24:26.009671  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:26.009684  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:26.009720  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:27.968483  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.958772952s)
	I1010 19:24:27.968534  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1010 19:24:27.968559  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.958813643s)
	I1010 19:24:27.968587  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1010 19:24:27.968619  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:27.968686  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:25.069787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:27.567013  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:28.050968  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:30.551528  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:29.027960  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.528054  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.028227  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.527791  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.027790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.528761  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.028343  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.528553  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.028195  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.527895  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.315157  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.346440456s)
	I1010 19:24:29.315211  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1010 19:24:29.315244  147213 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:29.315296  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:30.173931  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 19:24:30.173977  147213 cache_images.go:123] Successfully loaded all cached images
	I1010 19:24:30.173985  147213 cache_images.go:92] duration metric: took 14.990666845s to LoadCachedImages
	I1010 19:24:30.174001  147213 kubeadm.go:934] updating node { 192.168.72.11 8443 v1.31.1 crio true true} ...
	I1010 19:24:30.174129  147213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:24:30.174221  147213 ssh_runner.go:195] Run: crio config
	I1010 19:24:30.222677  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:30.222702  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:30.222711  147213 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:24:30.222736  147213 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320324 NodeName:no-preload-320324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:24:30.222923  147213 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320324"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:24:30.222998  147213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:24:30.233755  147213 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:24:30.233818  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:24:30.243829  147213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1010 19:24:30.263056  147213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:24:30.282362  147213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1010 19:24:30.300449  147213 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I1010 19:24:30.304661  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:30.317462  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:30.445515  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:30.462816  147213 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324 for IP: 192.168.72.11
	I1010 19:24:30.462847  147213 certs.go:194] generating shared ca certs ...
	I1010 19:24:30.462871  147213 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:30.463074  147213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:24:30.463132  147213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:24:30.463145  147213 certs.go:256] generating profile certs ...
	I1010 19:24:30.463289  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/client.key
	I1010 19:24:30.463364  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key.a7785fc5
	I1010 19:24:30.463413  147213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key
	I1010 19:24:30.463565  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:24:30.463604  147213 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:24:30.463617  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:24:30.463657  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:24:30.463689  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:24:30.463721  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:24:30.463774  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:30.464502  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:24:30.525320  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:24:30.565229  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:24:30.597731  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:24:30.626174  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 19:24:30.659991  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:24:30.685662  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:24:30.710757  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:24:30.736325  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:24:30.771239  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:24:30.796467  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:24:30.821925  147213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:24:30.840743  147213 ssh_runner.go:195] Run: openssl version
	I1010 19:24:30.846898  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:24:30.858410  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863188  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863260  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.869307  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:24:30.880319  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:24:30.891307  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895771  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895828  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.901510  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:24:30.912627  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:24:30.924330  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929108  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929194  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.935266  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:24:30.946714  147213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:24:30.951692  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:24:30.957910  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:24:30.964296  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:24:30.971001  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:24:30.977427  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:24:30.984201  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:24:30.990532  147213 kubeadm.go:392] StartCluster: {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:24:30.990622  147213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:24:30.990727  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.033544  147213 cri.go:89] found id: ""
	I1010 19:24:31.033624  147213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:24:31.044956  147213 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:24:31.044975  147213 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:24:31.045025  147213 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:24:31.056563  147213 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:24:31.057705  147213 kubeconfig.go:125] found "no-preload-320324" server: "https://192.168.72.11:8443"
	I1010 19:24:31.059853  147213 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:24:31.071304  147213 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.11
	I1010 19:24:31.071338  147213 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:24:31.071353  147213 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:24:31.071444  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.107345  147213 cri.go:89] found id: ""
	I1010 19:24:31.107429  147213 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:24:31.125556  147213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:24:31.135390  147213 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:24:31.135428  147213 kubeadm.go:157] found existing configuration files:
	
	I1010 19:24:31.135478  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:24:31.144653  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:24:31.144715  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:24:31.154458  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:24:31.163444  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:24:31.163501  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:24:31.172633  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.181939  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:24:31.182001  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.191638  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:24:31.200846  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:24:31.200935  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:24:31.211048  147213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:24:31.221008  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:31.352733  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.270546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.474510  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.551517  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.707737  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:32.707826  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.208647  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.708539  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.728647  147213 api_server.go:72] duration metric: took 1.020907246s to wait for apiserver process to appear ...
	I1010 19:24:33.728678  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:33.728701  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:30.066635  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.066732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.552277  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:35.051399  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:34.028434  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:34.527949  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.028224  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.528470  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.028540  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.528499  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.028362  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.528483  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.028531  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.527865  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.025756  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.025787  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.025802  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.078247  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.078283  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.229601  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.237166  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.237204  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:37.728824  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.735660  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.735700  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.229746  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.234449  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:38.234491  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.729000  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.737564  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:24:38.751982  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:38.752012  147213 api_server.go:131] duration metric: took 5.023326632s to wait for apiserver health ...
	I1010 19:24:38.752023  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:38.752030  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:38.753351  147213 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:34.067208  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:36.067413  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.566729  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.754645  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:38.772086  147213 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:38.792017  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:38.800547  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:38.800592  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:38.800602  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:38.800609  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:38.800617  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:38.800624  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:24:38.800629  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:38.800638  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:38.800642  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:24:38.800648  147213 system_pods.go:74] duration metric: took 8.60732ms to wait for pod list to return data ...
	I1010 19:24:38.800654  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:38.804628  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:38.804663  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:38.804680  147213 node_conditions.go:105] duration metric: took 4.021699ms to run NodePressure ...
	I1010 19:24:38.804700  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:39.078452  147213 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087090  147213 kubeadm.go:739] kubelet initialised
	I1010 19:24:39.087116  147213 kubeadm.go:740] duration metric: took 8.636436ms waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087125  147213 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:39.094468  147213 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.108724  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108756  147213 pod_ready.go:82] duration metric: took 14.254631ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.108770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108780  147213 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.119304  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119335  147213 pod_ready.go:82] duration metric: took 10.543376ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.119345  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119352  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.127243  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127268  147213 pod_ready.go:82] duration metric: took 7.907414ms for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.127278  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127285  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.195549  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195578  147213 pod_ready.go:82] duration metric: took 68.282333ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.195588  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195594  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.595842  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595871  147213 pod_ready.go:82] duration metric: took 400.267905ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.595880  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595886  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.995731  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995760  147213 pod_ready.go:82] duration metric: took 399.866947ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.995770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995777  147213 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:40.396420  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396456  147213 pod_ready.go:82] duration metric: took 400.667834ms for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:40.396470  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396482  147213 pod_ready.go:39] duration metric: took 1.309346973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:40.396508  147213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:24:40.409956  147213 ops.go:34] apiserver oom_adj: -16
	I1010 19:24:40.409980  147213 kubeadm.go:597] duration metric: took 9.364998977s to restartPrimaryControlPlane
	I1010 19:24:40.409991  147213 kubeadm.go:394] duration metric: took 9.419470024s to StartCluster
	I1010 19:24:40.410009  147213 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.410085  147213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:24:40.413037  147213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.413448  147213 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:24:40.413783  147213 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:24:40.413979  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:40.413996  147213 addons.go:69] Setting default-storageclass=true in profile "no-preload-320324"
	I1010 19:24:40.414020  147213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320324"
	I1010 19:24:40.413983  147213 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320324"
	I1010 19:24:40.414048  147213 addons.go:234] Setting addon storage-provisioner=true in "no-preload-320324"
	W1010 19:24:40.414057  147213 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:24:40.414091  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414170  147213 addons.go:69] Setting metrics-server=true in profile "no-preload-320324"
	I1010 19:24:40.414230  147213 addons.go:234] Setting addon metrics-server=true in "no-preload-320324"
	W1010 19:24:40.414252  147213 addons.go:243] addon metrics-server should already be in state true
	I1010 19:24:40.414292  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414612  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414640  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414678  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.414712  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.415409  147213 out.go:177] * Verifying Kubernetes components...
	I1010 19:24:40.415412  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.415553  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.416812  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:40.431363  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1010 19:24:40.431474  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I1010 19:24:40.431659  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I1010 19:24:40.431983  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432136  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432156  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432567  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432587  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432710  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432732  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432740  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432749  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.433000  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433079  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433103  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433468  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.433498  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.436984  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.453362  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.453426  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.454884  147213 addons.go:234] Setting addon default-storageclass=true in "no-preload-320324"
	W1010 19:24:40.454913  147213 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:24:40.454947  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.455335  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.455394  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.470642  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1010 19:24:40.471118  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.471701  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.471730  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.472241  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.472523  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.473953  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I1010 19:24:40.474196  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I1010 19:24:40.474332  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474672  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474814  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.474827  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475181  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.475210  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475310  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475702  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475785  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.475825  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.475922  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.476046  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.478147  147213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:40.478395  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.479869  147213 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.479896  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:24:40.479922  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.480549  147213 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:24:37.051611  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:39.551952  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:41.553895  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:40.482101  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:24:40.482119  147213 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:24:40.482144  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.484066  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484560  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.484588  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484833  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.485065  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.485241  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.485272  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485443  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.485788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.485807  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485842  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.486017  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.486202  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.486454  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.492533  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1010 19:24:40.493012  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.493566  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.493595  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.494056  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.494325  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.496053  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.496301  147213 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.496321  147213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:24:40.496344  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.499125  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499667  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.499690  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499843  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.500022  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.500194  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.500357  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.651454  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:40.667056  147213 node_ready.go:35] waiting up to 6m0s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:40.782217  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.803094  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:24:40.803122  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:24:40.812288  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.837679  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:24:40.837723  147213 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:24:40.882090  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:40.882119  147213 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:24:40.940115  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:41.949181  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.136852217s)
	I1010 19:24:41.949258  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949275  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949286  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.167030419s)
	I1010 19:24:41.949327  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949345  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949625  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949652  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949660  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949661  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949668  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949679  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949761  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949804  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949819  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949826  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.950811  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950824  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.950827  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950822  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950845  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950811  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.957797  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.957814  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.958071  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.958077  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.958099  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005530  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065377363s)
	I1010 19:24:42.005590  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.005602  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.005914  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.005937  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005935  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.005972  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.006003  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.006280  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.006313  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.006335  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.006354  147213 addons.go:475] Verifying addon metrics-server=true in "no-preload-320324"
	I1010 19:24:42.008523  147213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1010 19:24:39.028363  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:39.528571  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.027992  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.528552  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.028220  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.527889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:42.028462  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:42.028549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:42.070669  148123 cri.go:89] found id: ""
	I1010 19:24:42.070701  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.070710  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:42.070716  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:42.070775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:42.110693  148123 cri.go:89] found id: ""
	I1010 19:24:42.110731  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.110748  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:42.110756  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:42.110816  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:42.148471  148123 cri.go:89] found id: ""
	I1010 19:24:42.148511  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.148525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:42.148535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:42.148603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:42.184637  148123 cri.go:89] found id: ""
	I1010 19:24:42.184670  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.184683  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:42.184691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:42.184759  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:42.226794  148123 cri.go:89] found id: ""
	I1010 19:24:42.226834  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.226848  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:42.226857  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:42.226942  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:42.262969  148123 cri.go:89] found id: ""
	I1010 19:24:42.263004  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.263017  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:42.263027  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:42.263167  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:42.301063  148123 cri.go:89] found id: ""
	I1010 19:24:42.301088  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.301096  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:42.301102  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:42.301153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:42.338829  148123 cri.go:89] found id: ""
	I1010 19:24:42.338862  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.338873  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:42.338886  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:42.338901  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:42.391426  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:42.391478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:42.405520  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:42.405563  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:42.544245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:42.544275  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:42.544292  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:42.620760  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:42.620804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:42.009965  147213 addons.go:510] duration metric: took 1.596190602s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1010 19:24:42.672792  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:41.066744  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.066850  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.557231  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:46.051820  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:45.164389  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:45.179714  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:45.179776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:45.216271  148123 cri.go:89] found id: ""
	I1010 19:24:45.216308  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.216319  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:45.216327  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:45.216394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:45.255109  148123 cri.go:89] found id: ""
	I1010 19:24:45.255154  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.255167  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:45.255176  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:45.255248  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:45.294338  148123 cri.go:89] found id: ""
	I1010 19:24:45.294369  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.294380  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:45.294388  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:45.294457  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:45.339573  148123 cri.go:89] found id: ""
	I1010 19:24:45.339606  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.339626  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:45.339636  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:45.339703  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:45.400184  148123 cri.go:89] found id: ""
	I1010 19:24:45.400214  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.400225  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:45.400234  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:45.400301  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:45.436152  148123 cri.go:89] found id: ""
	I1010 19:24:45.436183  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.436195  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:45.436203  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:45.436262  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.484321  148123 cri.go:89] found id: ""
	I1010 19:24:45.484347  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.484355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:45.484361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:45.484441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:45.532893  148123 cri.go:89] found id: ""
	I1010 19:24:45.532923  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.532932  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:45.532943  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:45.532958  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:45.585183  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:45.585214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:45.638800  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:45.638847  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:45.653928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:45.653961  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:45.733534  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:45.733564  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:45.733580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.314367  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:48.329687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:48.329754  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:48.370372  148123 cri.go:89] found id: ""
	I1010 19:24:48.370400  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.370409  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:48.370415  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:48.370490  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:48.409362  148123 cri.go:89] found id: ""
	I1010 19:24:48.409396  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.409407  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:48.409415  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:48.409500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:48.449640  148123 cri.go:89] found id: ""
	I1010 19:24:48.449672  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.449681  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:48.449687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:48.449746  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:48.485053  148123 cri.go:89] found id: ""
	I1010 19:24:48.485104  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.485121  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:48.485129  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:48.485183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:48.520083  148123 cri.go:89] found id: ""
	I1010 19:24:48.520113  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.520121  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:48.520127  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:48.520185  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:48.555112  148123 cri.go:89] found id: ""
	I1010 19:24:48.555138  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.555149  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:48.555156  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:48.555241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.171882  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:47.673073  147213 node_ready.go:49] node "no-preload-320324" has status "Ready":"True"
	I1010 19:24:47.673103  147213 node_ready.go:38] duration metric: took 7.00601327s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:47.673117  147213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:47.682195  147213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690079  147213 pod_ready.go:93] pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.690111  147213 pod_ready.go:82] duration metric: took 7.882823ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690126  147213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698009  147213 pod_ready.go:93] pod "etcd-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.698038  147213 pod_ready.go:82] duration metric: took 7.903016ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698052  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:45.066893  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:47.566144  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.551853  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.050365  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.592682  148123 cri.go:89] found id: ""
	I1010 19:24:48.592710  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.592719  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:48.592725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:48.592775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:48.628449  148123 cri.go:89] found id: ""
	I1010 19:24:48.628482  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.628490  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:48.628500  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:48.628513  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.709385  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:48.709428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:48.752542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:48.752677  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:48.812331  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:48.812385  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:48.827057  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:48.827095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:48.908312  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.409122  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:51.423404  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:51.423482  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:51.465742  148123 cri.go:89] found id: ""
	I1010 19:24:51.465781  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.465793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:51.465803  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:51.465875  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:51.501597  148123 cri.go:89] found id: ""
	I1010 19:24:51.501630  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.501641  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:51.501648  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:51.501711  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:51.537500  148123 cri.go:89] found id: ""
	I1010 19:24:51.537539  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.537551  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:51.537559  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:51.537626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:51.579546  148123 cri.go:89] found id: ""
	I1010 19:24:51.579576  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.579587  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:51.579595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:51.579660  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:51.619893  148123 cri.go:89] found id: ""
	I1010 19:24:51.619917  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.619925  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:51.619931  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:51.619999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:51.655787  148123 cri.go:89] found id: ""
	I1010 19:24:51.655824  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.655837  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:51.655846  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:51.655921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:51.699582  148123 cri.go:89] found id: ""
	I1010 19:24:51.699611  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.699619  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:51.699625  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:51.699685  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:51.738658  148123 cri.go:89] found id: ""
	I1010 19:24:51.738689  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.738697  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:51.738707  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:51.738721  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:51.789556  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:51.789587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:51.858919  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:51.858968  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:51.887818  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:51.887854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:51.964408  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.964434  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:51.964449  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:49.705130  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.705847  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.205374  147213 pod_ready.go:93] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.205401  147213 pod_ready.go:82] duration metric: took 5.507341974s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.205413  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210237  147213 pod_ready.go:93] pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.210259  147213 pod_ready.go:82] duration metric: took 4.83925ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210269  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215158  147213 pod_ready.go:93] pod "kube-proxy-vn6sv" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.215186  147213 pod_ready.go:82] duration metric: took 4.909888ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215198  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220077  147213 pod_ready.go:93] pod "kube-scheduler-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.220097  147213 pod_ready.go:82] duration metric: took 4.890652ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220105  147213 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:50.066165  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:52.066343  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.552604  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:56.050748  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.545627  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:54.560748  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:54.560830  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:54.595863  148123 cri.go:89] found id: ""
	I1010 19:24:54.595893  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.595903  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:54.595912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:54.595978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:54.633645  148123 cri.go:89] found id: ""
	I1010 19:24:54.633681  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.633693  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:54.633701  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:54.633761  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:54.668269  148123 cri.go:89] found id: ""
	I1010 19:24:54.668299  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.668311  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:54.668317  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:54.668369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:54.706559  148123 cri.go:89] found id: ""
	I1010 19:24:54.706591  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.706600  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:54.706608  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:54.706673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:54.746248  148123 cri.go:89] found id: ""
	I1010 19:24:54.746283  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.746295  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:54.746303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:54.746383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:54.782982  148123 cri.go:89] found id: ""
	I1010 19:24:54.783017  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.783027  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:54.783033  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:54.783085  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:54.819664  148123 cri.go:89] found id: ""
	I1010 19:24:54.819700  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.819713  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:54.819722  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:54.819797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:54.859603  148123 cri.go:89] found id: ""
	I1010 19:24:54.859632  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.859640  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:54.859650  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:54.859662  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:54.910949  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:54.910987  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:54.925941  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:54.925975  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:55.009626  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:55.009654  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:55.009669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.097196  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:55.097237  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:57.641732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:57.661141  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:57.661222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:57.716054  148123 cri.go:89] found id: ""
	I1010 19:24:57.716086  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.716094  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:57.716100  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:57.716178  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:57.766862  148123 cri.go:89] found id: ""
	I1010 19:24:57.766892  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.766906  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:57.766917  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:57.766989  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:57.814776  148123 cri.go:89] found id: ""
	I1010 19:24:57.814808  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.814821  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:57.814829  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:57.814899  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:57.850458  148123 cri.go:89] found id: ""
	I1010 19:24:57.850495  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.850505  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:57.850516  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:57.850667  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:57.886541  148123 cri.go:89] found id: ""
	I1010 19:24:57.886566  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.886575  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:57.886581  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:57.886645  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:57.920748  148123 cri.go:89] found id: ""
	I1010 19:24:57.920783  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.920795  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:57.920802  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:57.920887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:57.957801  148123 cri.go:89] found id: ""
	I1010 19:24:57.957833  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.957844  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:57.957852  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:57.957919  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:57.995648  148123 cri.go:89] found id: ""
	I1010 19:24:57.995683  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.995694  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:57.995706  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:57.995723  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:58.034987  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:58.035030  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:58.089014  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:58.089066  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:58.104179  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:58.104211  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:58.178239  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:58.178271  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:58.178291  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.229459  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.727298  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.566779  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.065902  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:58.051248  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.550512  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.756057  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:00.770086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:00.770150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:00.811400  148123 cri.go:89] found id: ""
	I1010 19:25:00.811444  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.811456  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:00.811466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:00.811533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:00.848821  148123 cri.go:89] found id: ""
	I1010 19:25:00.848859  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.848870  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:00.848877  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:00.848947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:00.883529  148123 cri.go:89] found id: ""
	I1010 19:25:00.883557  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.883566  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:00.883573  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:00.883631  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:00.922952  148123 cri.go:89] found id: ""
	I1010 19:25:00.922982  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.922992  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:00.922998  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:00.923057  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:00.956116  148123 cri.go:89] found id: ""
	I1010 19:25:00.956147  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.956159  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:00.956168  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:00.956233  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:00.998892  148123 cri.go:89] found id: ""
	I1010 19:25:00.998919  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.998930  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:00.998939  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:00.998996  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:01.037617  148123 cri.go:89] found id: ""
	I1010 19:25:01.037649  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.037657  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:01.037663  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:01.037717  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:01.078003  148123 cri.go:89] found id: ""
	I1010 19:25:01.078034  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.078046  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:01.078055  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:01.078069  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:01.118948  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:01.118977  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:01.171053  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:01.171091  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:01.187027  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:01.187060  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:01.261288  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:01.261315  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:01.261330  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:59.728997  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.227142  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:59.566448  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.066184  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.551951  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:05.050558  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:03.848144  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:03.862527  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:03.862601  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:03.899120  148123 cri.go:89] found id: ""
	I1010 19:25:03.899159  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.899180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:03.899187  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:03.899276  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:03.935461  148123 cri.go:89] found id: ""
	I1010 19:25:03.935492  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.935500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:03.935507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:03.935569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:03.971892  148123 cri.go:89] found id: ""
	I1010 19:25:03.971925  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.971937  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:03.971945  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:03.972019  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:04.008341  148123 cri.go:89] found id: ""
	I1010 19:25:04.008368  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.008377  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:04.008390  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:04.008447  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:04.042007  148123 cri.go:89] found id: ""
	I1010 19:25:04.042036  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.042044  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:04.042051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:04.042101  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:04.078017  148123 cri.go:89] found id: ""
	I1010 19:25:04.078045  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.078053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:04.078059  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:04.078112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:04.116792  148123 cri.go:89] found id: ""
	I1010 19:25:04.116823  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.116832  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:04.116839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:04.116928  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:04.153468  148123 cri.go:89] found id: ""
	I1010 19:25:04.153496  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.153503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:04.153513  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:04.153525  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:04.230646  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:04.230683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:04.270975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:04.271015  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.320845  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:04.320902  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:04.337789  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:04.337828  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:04.416077  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:06.916412  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:06.931309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:06.931376  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:06.969224  148123 cri.go:89] found id: ""
	I1010 19:25:06.969257  148123 logs.go:282] 0 containers: []
	W1010 19:25:06.969269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:06.969277  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:06.969346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:07.006598  148123 cri.go:89] found id: ""
	I1010 19:25:07.006653  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.006667  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:07.006676  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:07.006744  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:07.046392  148123 cri.go:89] found id: ""
	I1010 19:25:07.046418  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.046427  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:07.046433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:07.046491  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:07.089570  148123 cri.go:89] found id: ""
	I1010 19:25:07.089603  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.089615  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:07.089624  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:07.089689  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:07.127152  148123 cri.go:89] found id: ""
	I1010 19:25:07.127185  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.127195  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:07.127204  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:07.127282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:07.160516  148123 cri.go:89] found id: ""
	I1010 19:25:07.160543  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.160554  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:07.160563  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:07.160635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:07.197270  148123 cri.go:89] found id: ""
	I1010 19:25:07.197307  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.197318  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:07.197327  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:07.197395  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:07.232658  148123 cri.go:89] found id: ""
	I1010 19:25:07.232687  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.232696  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:07.232706  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:07.232720  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:07.246579  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:07.246609  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:07.315200  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:07.315230  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:07.315251  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:07.389532  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:07.389580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:07.438808  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:07.438837  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.227537  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.727865  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:04.067121  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.565089  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:08.565565  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:07.051371  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.051420  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.054211  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.990294  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:10.004155  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:10.004270  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:10.039034  148123 cri.go:89] found id: ""
	I1010 19:25:10.039068  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.039078  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:10.039087  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:10.039174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:10.079936  148123 cri.go:89] found id: ""
	I1010 19:25:10.079967  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.079978  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:10.079985  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:10.080068  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:10.116441  148123 cri.go:89] found id: ""
	I1010 19:25:10.116471  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.116483  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:10.116491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:10.116556  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:10.151998  148123 cri.go:89] found id: ""
	I1010 19:25:10.152045  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.152058  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:10.152075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:10.152153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:10.188514  148123 cri.go:89] found id: ""
	I1010 19:25:10.188547  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.188565  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:10.188574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:10.188640  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:10.223648  148123 cri.go:89] found id: ""
	I1010 19:25:10.223682  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.223694  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:10.223702  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:10.223771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:10.264117  148123 cri.go:89] found id: ""
	I1010 19:25:10.264143  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.264151  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:10.264158  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:10.264215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:10.302566  148123 cri.go:89] found id: ""
	I1010 19:25:10.302601  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.302613  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:10.302625  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:10.302649  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:10.316567  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:10.316606  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:10.388642  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:10.388674  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:10.388692  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:10.471035  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:10.471090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:10.519600  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:10.519645  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:13.075428  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:13.090830  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:13.090915  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:13.126302  148123 cri.go:89] found id: ""
	I1010 19:25:13.126336  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.126348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:13.126357  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:13.126429  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:13.163309  148123 cri.go:89] found id: ""
	I1010 19:25:13.163344  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.163357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:13.163365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:13.163428  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:13.197569  148123 cri.go:89] found id: ""
	I1010 19:25:13.197605  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.197615  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:13.197621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:13.197692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:13.236299  148123 cri.go:89] found id: ""
	I1010 19:25:13.236328  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.236336  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:13.236342  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:13.236406  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:13.275667  148123 cri.go:89] found id: ""
	I1010 19:25:13.275696  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.275705  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:13.275711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:13.275776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:13.309728  148123 cri.go:89] found id: ""
	I1010 19:25:13.309763  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.309774  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:13.309786  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:13.309854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:13.347464  148123 cri.go:89] found id: ""
	I1010 19:25:13.347493  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.347504  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:13.347513  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:13.347576  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:13.385086  148123 cri.go:89] found id: ""
	I1010 19:25:13.385119  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.385130  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:13.385142  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:13.385156  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:13.466513  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:13.466553  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:13.508610  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:13.508651  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:09.226850  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.227241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.726879  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:10.565663  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:12.565845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.555465  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:16.051764  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.564897  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:13.564932  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:13.578771  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:13.578803  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:13.651857  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.153066  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:16.167613  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:16.167698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:16.203867  148123 cri.go:89] found id: ""
	I1010 19:25:16.203897  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.203906  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:16.203912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:16.203965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:16.242077  148123 cri.go:89] found id: ""
	I1010 19:25:16.242109  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.242130  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:16.242138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:16.242234  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:16.283792  148123 cri.go:89] found id: ""
	I1010 19:25:16.283825  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.283836  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:16.283844  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:16.283907  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:16.319934  148123 cri.go:89] found id: ""
	I1010 19:25:16.319969  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.319980  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:16.319990  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:16.320063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:16.360439  148123 cri.go:89] found id: ""
	I1010 19:25:16.360482  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.360495  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:16.360504  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:16.360569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:16.395890  148123 cri.go:89] found id: ""
	I1010 19:25:16.395922  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.395931  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:16.395941  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:16.396009  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:16.431396  148123 cri.go:89] found id: ""
	I1010 19:25:16.431442  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.431456  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:16.431463  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:16.431531  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:16.471768  148123 cri.go:89] found id: ""
	I1010 19:25:16.471796  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.471804  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:16.471814  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:16.471830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:16.526112  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:16.526155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:16.541041  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:16.541074  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:16.623245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.623273  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:16.623289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:16.710840  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:16.710884  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:15.727171  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.728705  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:15.067362  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.566242  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:18.551207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:21.050222  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:19.257566  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:19.273982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:19.274063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:19.311360  148123 cri.go:89] found id: ""
	I1010 19:25:19.311392  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.311404  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:19.311411  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:19.311475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:19.347025  148123 cri.go:89] found id: ""
	I1010 19:25:19.347053  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.347062  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:19.347068  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:19.347120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:19.383575  148123 cri.go:89] found id: ""
	I1010 19:25:19.383608  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.383620  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:19.383633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:19.383698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:19.419245  148123 cri.go:89] found id: ""
	I1010 19:25:19.419270  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.419278  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:19.419284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:19.419334  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:19.453563  148123 cri.go:89] found id: ""
	I1010 19:25:19.453591  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.453601  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:19.453658  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:19.453715  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:19.496430  148123 cri.go:89] found id: ""
	I1010 19:25:19.496458  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.496466  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:19.496473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:19.496533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:19.534626  148123 cri.go:89] found id: ""
	I1010 19:25:19.534670  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.534682  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:19.534691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:19.534757  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:19.574186  148123 cri.go:89] found id: ""
	I1010 19:25:19.574236  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.574248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:19.574261  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:19.574283  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:19.587952  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:19.587989  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:19.664607  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:19.664644  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:19.664664  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:19.742185  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:19.742224  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:19.781530  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:19.781562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.335398  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:22.349627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:22.349693  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:22.389075  148123 cri.go:89] found id: ""
	I1010 19:25:22.389102  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.389110  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:22.389117  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:22.389168  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:22.424331  148123 cri.go:89] found id: ""
	I1010 19:25:22.424368  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.424381  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:22.424389  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:22.424460  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:22.464011  148123 cri.go:89] found id: ""
	I1010 19:25:22.464051  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.464062  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:22.464070  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:22.464141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:22.500786  148123 cri.go:89] found id: ""
	I1010 19:25:22.500819  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.500828  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:22.500835  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:22.500914  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:22.538016  148123 cri.go:89] found id: ""
	I1010 19:25:22.538053  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.538061  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:22.538068  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:22.538124  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:22.576600  148123 cri.go:89] found id: ""
	I1010 19:25:22.576629  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.576637  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:22.576643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:22.576702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:22.616020  148123 cri.go:89] found id: ""
	I1010 19:25:22.616048  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.616057  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:22.616064  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:22.616114  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:22.652238  148123 cri.go:89] found id: ""
	I1010 19:25:22.652271  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.652283  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:22.652295  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:22.652311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:22.726182  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:22.726216  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:22.726234  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:22.814040  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:22.814086  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:22.859254  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:22.859289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.916565  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:22.916607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:20.227871  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.732566  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:20.066872  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.566173  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:23.050833  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.551662  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.432636  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:25.446982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:25.447047  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.482440  148123 cri.go:89] found id: ""
	I1010 19:25:25.482472  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.482484  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:25.482492  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:25.482552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:25.517709  148123 cri.go:89] found id: ""
	I1010 19:25:25.517742  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.517756  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:25.517764  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:25.517867  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:25.561504  148123 cri.go:89] found id: ""
	I1010 19:25:25.561532  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.561544  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:25.561552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:25.561616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:25.601704  148123 cri.go:89] found id: ""
	I1010 19:25:25.601741  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.601753  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:25.601762  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:25.601825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:25.638519  148123 cri.go:89] found id: ""
	I1010 19:25:25.638544  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.638552  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:25.638558  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:25.638609  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:25.673390  148123 cri.go:89] found id: ""
	I1010 19:25:25.673425  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.673436  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:25.673447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:25.673525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:25.708251  148123 cri.go:89] found id: ""
	I1010 19:25:25.708278  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.708286  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:25.708293  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:25.708354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:25.749793  148123 cri.go:89] found id: ""
	I1010 19:25:25.749826  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.749837  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:25.749846  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:25.749861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:25.802511  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:25.802554  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:25.817523  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:25.817562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:25.898595  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:25.898619  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:25.898635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:25.978629  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:25.978684  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:28.520165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:28.532725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:28.532793  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.226875  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.729015  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.066298  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.066963  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.551915  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.558497  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:28.571667  148123 cri.go:89] found id: ""
	I1010 19:25:28.571695  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.571704  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:28.571710  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:28.571770  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:28.608136  148123 cri.go:89] found id: ""
	I1010 19:25:28.608165  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.608174  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:28.608181  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:28.608244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:28.642123  148123 cri.go:89] found id: ""
	I1010 19:25:28.642161  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.642173  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:28.642181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:28.642242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:28.678600  148123 cri.go:89] found id: ""
	I1010 19:25:28.678633  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.678643  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:28.678651  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:28.678702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:28.716627  148123 cri.go:89] found id: ""
	I1010 19:25:28.716660  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.716673  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:28.716681  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:28.716766  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:28.752750  148123 cri.go:89] found id: ""
	I1010 19:25:28.752786  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.752798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:28.752806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:28.752898  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:28.790021  148123 cri.go:89] found id: ""
	I1010 19:25:28.790054  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.790066  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:28.790075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:28.790128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:28.828760  148123 cri.go:89] found id: ""
	I1010 19:25:28.828803  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.828827  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:28.828839  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:28.828877  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:28.879773  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:28.879811  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:28.894311  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:28.894342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:28.968675  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:28.968706  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:28.968729  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:29.045862  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:29.045913  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:31.588772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:31.601453  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:31.601526  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:31.637056  148123 cri.go:89] found id: ""
	I1010 19:25:31.637090  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.637101  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:31.637108  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:31.637183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:31.677825  148123 cri.go:89] found id: ""
	I1010 19:25:31.677854  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.677862  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:31.677867  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:31.677920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:31.715372  148123 cri.go:89] found id: ""
	I1010 19:25:31.715402  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.715411  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:31.715419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:31.715470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:31.753767  148123 cri.go:89] found id: ""
	I1010 19:25:31.753794  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.753806  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:31.753814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:31.753879  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:31.792999  148123 cri.go:89] found id: ""
	I1010 19:25:31.793024  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.793035  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:31.793050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:31.793106  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:31.831571  148123 cri.go:89] found id: ""
	I1010 19:25:31.831598  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.831607  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:31.831614  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:31.831673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:31.880382  148123 cri.go:89] found id: ""
	I1010 19:25:31.880422  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.880436  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:31.880447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:31.880527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:31.919596  148123 cri.go:89] found id: ""
	I1010 19:25:31.919627  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.919637  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:31.919647  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:31.919661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:31.967963  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:31.968001  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:31.983064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:31.983106  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:32.056073  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:32.056105  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:32.056122  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:32.142927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:32.142978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:30.226683  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.227047  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.565699  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:31.566109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.051411  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.052064  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.550062  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.690025  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:34.705326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:34.705413  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:34.740756  148123 cri.go:89] found id: ""
	I1010 19:25:34.740786  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.740793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:34.740799  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:34.740872  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:34.777021  148123 cri.go:89] found id: ""
	I1010 19:25:34.777049  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.777061  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:34.777069  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:34.777123  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:34.826699  148123 cri.go:89] found id: ""
	I1010 19:25:34.826735  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.826747  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:34.826754  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:34.826896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:34.864297  148123 cri.go:89] found id: ""
	I1010 19:25:34.864329  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.864340  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:34.864348  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:34.864415  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:34.899198  148123 cri.go:89] found id: ""
	I1010 19:25:34.899234  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.899247  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:34.899256  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:34.899315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:34.934132  148123 cri.go:89] found id: ""
	I1010 19:25:34.934162  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.934170  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:34.934176  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:34.934238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:34.969858  148123 cri.go:89] found id: ""
	I1010 19:25:34.969891  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.969903  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:34.969911  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:34.969978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:35.006082  148123 cri.go:89] found id: ""
	I1010 19:25:35.006121  148123 logs.go:282] 0 containers: []
	W1010 19:25:35.006132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:35.006145  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:35.006160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:35.060596  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:35.060631  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:35.076246  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:35.076281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:35.151879  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:35.151912  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:35.151931  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:35.231802  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:35.231845  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:37.774550  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:37.787871  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:37.787938  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:37.824273  148123 cri.go:89] found id: ""
	I1010 19:25:37.824306  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.824317  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:37.824325  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:37.824410  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:37.860548  148123 cri.go:89] found id: ""
	I1010 19:25:37.860582  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.860593  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:37.860601  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:37.860677  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:37.894616  148123 cri.go:89] found id: ""
	I1010 19:25:37.894644  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.894654  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:37.894662  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:37.894730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:37.931247  148123 cri.go:89] found id: ""
	I1010 19:25:37.931272  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.931280  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:37.931287  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:37.931337  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:37.968028  148123 cri.go:89] found id: ""
	I1010 19:25:37.968068  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.968079  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:37.968086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:37.968153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:38.004661  148123 cri.go:89] found id: ""
	I1010 19:25:38.004692  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.004700  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:38.004706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:38.004760  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:38.040874  148123 cri.go:89] found id: ""
	I1010 19:25:38.040906  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.040915  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:38.040922  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:38.040990  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:38.083771  148123 cri.go:89] found id: ""
	I1010 19:25:38.083802  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.083811  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:38.083821  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:38.083836  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:38.099645  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:38.099683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:38.172168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:38.172207  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:38.172223  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:38.248523  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:38.248561  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:38.306594  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:38.306630  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:34.728106  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:37.226285  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.065919  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.066751  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.067361  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.550359  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.551190  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.873868  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:40.889561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:40.889668  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:40.926769  148123 cri.go:89] found id: ""
	I1010 19:25:40.926800  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.926808  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:40.926816  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:40.926869  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:40.962564  148123 cri.go:89] found id: ""
	I1010 19:25:40.962592  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.962600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:40.962606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:40.962659  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:40.997856  148123 cri.go:89] found id: ""
	I1010 19:25:40.997885  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.997894  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:40.997900  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:40.997951  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:41.035479  148123 cri.go:89] found id: ""
	I1010 19:25:41.035506  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.035514  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:41.035520  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:41.035575  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:41.077733  148123 cri.go:89] found id: ""
	I1010 19:25:41.077817  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.077830  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:41.077840  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:41.077916  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:41.115439  148123 cri.go:89] found id: ""
	I1010 19:25:41.115476  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.115489  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:41.115498  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:41.115552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:41.152586  148123 cri.go:89] found id: ""
	I1010 19:25:41.152625  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.152636  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:41.152643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:41.152695  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:41.186807  148123 cri.go:89] found id: ""
	I1010 19:25:41.186840  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.186851  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:41.186862  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:41.186880  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:41.200404  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:41.200447  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:41.280717  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:41.280744  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:41.280760  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:41.360561  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:41.360611  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:41.404461  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:41.404497  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:39.226903  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:41.227077  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.727197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.570404  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.066523  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.050813  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.051094  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.955723  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:43.970895  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:43.970964  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:44.012162  148123 cri.go:89] found id: ""
	I1010 19:25:44.012194  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.012203  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:44.012209  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:44.012282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:44.074583  148123 cri.go:89] found id: ""
	I1010 19:25:44.074606  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.074614  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:44.074620  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:44.074675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:44.133562  148123 cri.go:89] found id: ""
	I1010 19:25:44.133588  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.133596  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:44.133602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:44.133658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:44.168832  148123 cri.go:89] found id: ""
	I1010 19:25:44.168883  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.168896  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:44.168908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:44.168967  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:44.204896  148123 cri.go:89] found id: ""
	I1010 19:25:44.204923  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.204934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:44.204943  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:44.205002  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:44.242410  148123 cri.go:89] found id: ""
	I1010 19:25:44.242437  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.242448  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:44.242456  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:44.242524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:44.278553  148123 cri.go:89] found id: ""
	I1010 19:25:44.278581  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.278589  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:44.278595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:44.278658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:44.316099  148123 cri.go:89] found id: ""
	I1010 19:25:44.316125  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.316132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:44.316141  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:44.316155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:44.365809  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:44.365860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:44.380129  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:44.380162  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:44.454241  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:44.454267  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:44.454281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:44.530928  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:44.530974  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.078279  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:47.092233  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:47.092310  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:47.129517  148123 cri.go:89] found id: ""
	I1010 19:25:47.129557  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.129573  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:47.129582  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:47.129654  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:47.170309  148123 cri.go:89] found id: ""
	I1010 19:25:47.170345  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.170357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:47.170365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:47.170432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:47.205110  148123 cri.go:89] found id: ""
	I1010 19:25:47.205145  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.205156  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:47.205162  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:47.205228  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:47.245668  148123 cri.go:89] found id: ""
	I1010 19:25:47.245695  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.245704  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:47.245711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:47.245768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:47.284549  148123 cri.go:89] found id: ""
	I1010 19:25:47.284576  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.284584  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:47.284590  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:47.284643  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:47.324676  148123 cri.go:89] found id: ""
	I1010 19:25:47.324708  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.324720  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:47.324729  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:47.324782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:47.364315  148123 cri.go:89] found id: ""
	I1010 19:25:47.364347  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.364356  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:47.364362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:47.364433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:47.400611  148123 cri.go:89] found id: ""
	I1010 19:25:47.400641  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.400652  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:47.400664  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:47.400680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:47.415185  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:47.415227  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:47.481638  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:47.481665  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:47.481681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:47.560135  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:47.560171  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.602144  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:47.602174  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:46.227386  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:48.227699  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.066887  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.565340  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.051459  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:49.550170  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:51.554542  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.151770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:50.165336  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:50.165403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:50.201045  148123 cri.go:89] found id: ""
	I1010 19:25:50.201072  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.201082  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:50.201089  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:50.201154  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:50.237047  148123 cri.go:89] found id: ""
	I1010 19:25:50.237082  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.237094  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:50.237102  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:50.237174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:50.273727  148123 cri.go:89] found id: ""
	I1010 19:25:50.273756  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.273767  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:50.273780  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:50.273843  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:50.311397  148123 cri.go:89] found id: ""
	I1010 19:25:50.311424  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.311433  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:50.311450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:50.311513  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:50.347593  148123 cri.go:89] found id: ""
	I1010 19:25:50.347625  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.347637  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:50.347705  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:50.347782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:50.382195  148123 cri.go:89] found id: ""
	I1010 19:25:50.382228  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.382240  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:50.382247  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:50.382313  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:50.419189  148123 cri.go:89] found id: ""
	I1010 19:25:50.419221  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.419229  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:50.419236  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:50.419297  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:50.454368  148123 cri.go:89] found id: ""
	I1010 19:25:50.454399  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.454410  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:50.454421  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:50.454440  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:50.495575  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:50.495623  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.548691  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:50.548737  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:50.564585  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:50.564621  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:50.637152  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:50.637174  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:50.637190  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.221939  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:53.235889  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:53.235968  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:53.278589  148123 cri.go:89] found id: ""
	I1010 19:25:53.278620  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.278632  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:53.278640  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:53.278705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:53.315062  148123 cri.go:89] found id: ""
	I1010 19:25:53.315090  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.315101  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:53.315109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:53.315182  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:53.348994  148123 cri.go:89] found id: ""
	I1010 19:25:53.349031  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.349040  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:53.349046  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:53.349099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:53.383334  148123 cri.go:89] found id: ""
	I1010 19:25:53.383363  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.383373  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:53.383380  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:53.383450  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:53.417888  148123 cri.go:89] found id: ""
	I1010 19:25:53.417920  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.417931  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:53.417938  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:53.418013  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:53.454757  148123 cri.go:89] found id: ""
	I1010 19:25:53.454787  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.454798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:53.454806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:53.454871  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:53.491422  148123 cri.go:89] found id: ""
	I1010 19:25:53.491452  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.491464  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:53.491472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:53.491541  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:53.531240  148123 cri.go:89] found id: ""
	I1010 19:25:53.531271  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.531282  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:53.531294  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:53.531310  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.727196  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.226957  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.065907  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:52.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:54.051112  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:56.554137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.582576  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:53.582608  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:53.596481  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:53.596511  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:53.666134  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:53.666162  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:53.666178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.745621  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:53.745659  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.290744  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:56.307536  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:56.307610  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:56.353456  148123 cri.go:89] found id: ""
	I1010 19:25:56.353484  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.353494  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:56.353501  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:56.353553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:56.394093  148123 cri.go:89] found id: ""
	I1010 19:25:56.394120  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.394131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:56.394138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:56.394203  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:56.428388  148123 cri.go:89] found id: ""
	I1010 19:25:56.428428  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.428441  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:56.428450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:56.428524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:56.464414  148123 cri.go:89] found id: ""
	I1010 19:25:56.464459  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.464472  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:56.464480  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:56.464563  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:56.505271  148123 cri.go:89] found id: ""
	I1010 19:25:56.505307  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.505316  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:56.505322  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:56.505374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:56.546424  148123 cri.go:89] found id: ""
	I1010 19:25:56.546456  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.546467  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:56.546475  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:56.546545  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:56.587458  148123 cri.go:89] found id: ""
	I1010 19:25:56.587489  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.587501  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:56.587509  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:56.587588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:56.628248  148123 cri.go:89] found id: ""
	I1010 19:25:56.628279  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.628291  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:56.628304  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:56.628320  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:56.642109  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:56.642141  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:56.723646  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:56.723678  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:56.723696  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:56.805849  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:56.805899  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.849863  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:56.849891  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:55.230447  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.726896  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:55.066248  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.565240  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.051145  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:01.554276  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.407093  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:59.422915  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:59.423005  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:59.462867  148123 cri.go:89] found id: ""
	I1010 19:25:59.462898  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.462909  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:59.462917  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:59.462980  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:59.496931  148123 cri.go:89] found id: ""
	I1010 19:25:59.496958  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.496968  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:59.496983  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:59.497049  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:59.532241  148123 cri.go:89] found id: ""
	I1010 19:25:59.532271  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.532283  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:59.532291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:59.532352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:59.571285  148123 cri.go:89] found id: ""
	I1010 19:25:59.571313  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.571324  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:59.571331  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:59.571391  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:59.606687  148123 cri.go:89] found id: ""
	I1010 19:25:59.606721  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.606734  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:59.606741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:59.606800  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:59.643247  148123 cri.go:89] found id: ""
	I1010 19:25:59.643276  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.643286  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:59.643294  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:59.643369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:59.679305  148123 cri.go:89] found id: ""
	I1010 19:25:59.679335  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.679344  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:59.679350  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:59.679407  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:59.716149  148123 cri.go:89] found id: ""
	I1010 19:25:59.716198  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.716210  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:59.716222  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:59.716239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:59.730334  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:59.730364  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:59.799759  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.799786  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:59.799804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:59.881883  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:59.881925  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:59.923755  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:59.923784  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.478043  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:02.492627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:02.492705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:02.532133  148123 cri.go:89] found id: ""
	I1010 19:26:02.532160  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.532172  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:02.532179  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:02.532244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:02.571915  148123 cri.go:89] found id: ""
	I1010 19:26:02.571943  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.571951  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:02.571957  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:02.572014  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:02.616330  148123 cri.go:89] found id: ""
	I1010 19:26:02.616365  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.616376  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:02.616385  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:02.616455  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:02.662245  148123 cri.go:89] found id: ""
	I1010 19:26:02.662275  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.662284  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:02.662291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:02.662354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:02.698692  148123 cri.go:89] found id: ""
	I1010 19:26:02.698720  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.698731  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:02.698739  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:02.698805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:02.736643  148123 cri.go:89] found id: ""
	I1010 19:26:02.736667  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.736677  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:02.736685  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:02.736742  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:02.780356  148123 cri.go:89] found id: ""
	I1010 19:26:02.780388  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.780399  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:02.780407  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:02.780475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:02.816716  148123 cri.go:89] found id: ""
	I1010 19:26:02.816744  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.816752  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:02.816761  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:02.816775  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:02.899002  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:02.899046  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:02.942060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:02.942093  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.997605  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:02.997661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:03.011768  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:03.011804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:03.096404  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.727075  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.227526  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.565903  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.066179  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.049656  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.050425  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:05.596971  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:05.612524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:05.612598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:05.649999  148123 cri.go:89] found id: ""
	I1010 19:26:05.650032  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.650044  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:05.650052  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:05.650119  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:05.684365  148123 cri.go:89] found id: ""
	I1010 19:26:05.684398  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.684419  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:05.684427  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:05.684494  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:05.721801  148123 cri.go:89] found id: ""
	I1010 19:26:05.721832  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.721840  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:05.721847  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:05.721908  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:05.762010  148123 cri.go:89] found id: ""
	I1010 19:26:05.762037  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.762046  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:05.762056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:05.762117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:05.803425  148123 cri.go:89] found id: ""
	I1010 19:26:05.803457  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.803468  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:05.803476  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:05.803551  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:05.838800  148123 cri.go:89] found id: ""
	I1010 19:26:05.838833  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.838845  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:05.838853  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:05.838921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:05.875110  148123 cri.go:89] found id: ""
	I1010 19:26:05.875147  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.875158  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:05.875167  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:05.875245  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:05.908951  148123 cri.go:89] found id: ""
	I1010 19:26:05.908988  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.909000  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:05.909012  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:05.909027  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:05.922263  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:05.922293  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:05.996441  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:05.996465  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:05.996484  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:06.078113  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:06.078160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:06.122919  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:06.122956  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:04.726335  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.728178  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.066573  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.564991  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.566655  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.050522  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:10.550288  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.674464  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:08.688452  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:08.688570  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:08.726240  148123 cri.go:89] found id: ""
	I1010 19:26:08.726269  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.726279  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:08.726287  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:08.726370  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:08.764762  148123 cri.go:89] found id: ""
	I1010 19:26:08.764790  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.764799  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:08.764806  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:08.764887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:08.805522  148123 cri.go:89] found id: ""
	I1010 19:26:08.805552  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.805563  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:08.805572  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:08.805642  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:08.845694  148123 cri.go:89] found id: ""
	I1010 19:26:08.845729  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.845740  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:08.845747  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:08.845817  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:08.885024  148123 cri.go:89] found id: ""
	I1010 19:26:08.885054  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.885066  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:08.885074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:08.885138  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:08.927500  148123 cri.go:89] found id: ""
	I1010 19:26:08.927531  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.927540  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:08.927547  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:08.927616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:08.963870  148123 cri.go:89] found id: ""
	I1010 19:26:08.963904  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.963916  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:08.963924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:08.963988  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:08.997984  148123 cri.go:89] found id: ""
	I1010 19:26:08.998017  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.998027  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:08.998039  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:08.998056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:09.049307  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:09.049341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:09.063341  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:09.063432  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:09.145190  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:09.145214  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:09.145226  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:09.231409  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:09.231445  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:11.773475  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:11.788981  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:11.789055  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:11.830289  148123 cri.go:89] found id: ""
	I1010 19:26:11.830319  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.830327  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:11.830333  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:11.830399  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:11.867093  148123 cri.go:89] found id: ""
	I1010 19:26:11.867120  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.867131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:11.867138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:11.867212  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:11.902146  148123 cri.go:89] found id: ""
	I1010 19:26:11.902181  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.902192  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:11.902201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:11.902271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:11.938652  148123 cri.go:89] found id: ""
	I1010 19:26:11.938681  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.938691  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:11.938703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:11.938771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:11.975903  148123 cri.go:89] found id: ""
	I1010 19:26:11.975937  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.975947  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:11.975955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:11.976020  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:12.014083  148123 cri.go:89] found id: ""
	I1010 19:26:12.014111  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.014119  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:12.014126  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:12.014190  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:12.057832  148123 cri.go:89] found id: ""
	I1010 19:26:12.057858  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.057867  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:12.057874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:12.057925  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:12.103253  148123 cri.go:89] found id: ""
	I1010 19:26:12.103286  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.103298  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:12.103311  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:12.103326  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:12.171062  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:12.171104  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:12.185463  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:12.185496  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:12.261568  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:12.261592  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:12.261607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:12.345819  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:12.345860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:09.226954  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.227205  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.227457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.066777  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.565854  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:12.551323  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:15.051745  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:14.886283  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:14.901117  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:14.901213  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:14.935235  148123 cri.go:89] found id: ""
	I1010 19:26:14.935264  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.935273  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:14.935280  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:14.935336  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:14.970617  148123 cri.go:89] found id: ""
	I1010 19:26:14.970642  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.970652  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:14.970658  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:14.970722  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:15.011086  148123 cri.go:89] found id: ""
	I1010 19:26:15.011113  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.011122  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:15.011128  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:15.011188  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:15.053004  148123 cri.go:89] found id: ""
	I1010 19:26:15.053026  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.053033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:15.053039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:15.053099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:15.089275  148123 cri.go:89] found id: ""
	I1010 19:26:15.089304  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.089312  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:15.089319  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:15.089383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:15.126620  148123 cri.go:89] found id: ""
	I1010 19:26:15.126645  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.126654  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:15.126660  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:15.126723  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:15.160920  148123 cri.go:89] found id: ""
	I1010 19:26:15.160956  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.160969  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:15.160979  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:15.161037  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:15.197430  148123 cri.go:89] found id: ""
	I1010 19:26:15.197462  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.197474  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:15.197486  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:15.197507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:15.240905  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:15.240953  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:15.297663  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:15.297719  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:15.312279  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:15.312313  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:15.395296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:15.395320  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:15.395336  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:17.978030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:17.992574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:17.992651  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:18.028846  148123 cri.go:89] found id: ""
	I1010 19:26:18.028890  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.028901  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:18.028908  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:18.028965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:18.073004  148123 cri.go:89] found id: ""
	I1010 19:26:18.073033  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.073041  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:18.073047  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:18.073118  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:18.112062  148123 cri.go:89] found id: ""
	I1010 19:26:18.112098  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.112111  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:18.112121  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:18.112209  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:18.151565  148123 cri.go:89] found id: ""
	I1010 19:26:18.151597  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.151608  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:18.151616  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:18.151675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:18.197483  148123 cri.go:89] found id: ""
	I1010 19:26:18.197509  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.197518  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:18.197524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:18.197591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:18.235151  148123 cri.go:89] found id: ""
	I1010 19:26:18.235181  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.235193  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:18.235201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:18.235279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:18.273807  148123 cri.go:89] found id: ""
	I1010 19:26:18.273841  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.273852  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:18.273858  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:18.273920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:18.311644  148123 cri.go:89] found id: ""
	I1010 19:26:18.311677  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.311688  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:18.311700  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:18.311716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:18.355599  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:18.355635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:18.407786  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:18.407830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:18.422944  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:18.422976  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:18.496830  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:18.496873  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:18.496887  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:15.227600  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.726712  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:16.065701  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:18.066861  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.558257  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.050914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:21.108300  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:21.123177  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:21.123243  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:21.162544  148123 cri.go:89] found id: ""
	I1010 19:26:21.162575  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.162586  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:21.162595  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:21.162662  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:21.199799  148123 cri.go:89] found id: ""
	I1010 19:26:21.199828  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.199839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:21.199845  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:21.199901  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:21.246333  148123 cri.go:89] found id: ""
	I1010 19:26:21.246357  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.246366  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:21.246372  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:21.246433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:21.282681  148123 cri.go:89] found id: ""
	I1010 19:26:21.282719  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.282732  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:21.282740  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:21.282810  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:21.319500  148123 cri.go:89] found id: ""
	I1010 19:26:21.319535  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.319546  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:21.319554  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:21.319623  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:21.355779  148123 cri.go:89] found id: ""
	I1010 19:26:21.355809  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.355820  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:21.355828  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:21.355902  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:21.394567  148123 cri.go:89] found id: ""
	I1010 19:26:21.394608  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.394617  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:21.394623  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:21.394684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:21.430009  148123 cri.go:89] found id: ""
	I1010 19:26:21.430046  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.430058  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:21.430069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:21.430089  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:21.443267  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:21.443301  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:21.517046  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:21.517068  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:21.517083  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:21.594927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:21.594978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:21.634274  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:21.634311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:20.227157  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.727736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.566652  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:23.066459  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.550526  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.050647  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:24.189760  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:24.204355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:24.204420  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:24.248130  148123 cri.go:89] found id: ""
	I1010 19:26:24.248160  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.248171  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:24.248178  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:24.248244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:24.293281  148123 cri.go:89] found id: ""
	I1010 19:26:24.293312  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.293324  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:24.293331  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:24.293400  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:24.350709  148123 cri.go:89] found id: ""
	I1010 19:26:24.350743  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.350755  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:24.350765  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:24.350838  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:24.394113  148123 cri.go:89] found id: ""
	I1010 19:26:24.394152  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.394170  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:24.394181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:24.394256  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:24.445999  148123 cri.go:89] found id: ""
	I1010 19:26:24.446030  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.446042  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:24.446051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:24.446120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:24.483566  148123 cri.go:89] found id: ""
	I1010 19:26:24.483596  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.483605  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:24.483612  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:24.483665  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:24.520705  148123 cri.go:89] found id: ""
	I1010 19:26:24.520736  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.520748  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:24.520757  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:24.520825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:24.557316  148123 cri.go:89] found id: ""
	I1010 19:26:24.557346  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.557355  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:24.557364  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:24.557376  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:24.608065  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:24.608109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:24.623202  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:24.623250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:24.694665  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:24.694697  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:24.694716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.783369  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:24.783414  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.327524  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:27.341132  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:27.341214  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:27.376589  148123 cri.go:89] found id: ""
	I1010 19:26:27.376618  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.376627  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:27.376633  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:27.376684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:27.414452  148123 cri.go:89] found id: ""
	I1010 19:26:27.414491  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.414502  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:27.414510  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:27.414572  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:27.449839  148123 cri.go:89] found id: ""
	I1010 19:26:27.449867  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.449875  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:27.449882  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:27.449932  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:27.492445  148123 cri.go:89] found id: ""
	I1010 19:26:27.492472  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.492480  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:27.492487  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:27.492549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:27.533032  148123 cri.go:89] found id: ""
	I1010 19:26:27.533085  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.533095  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:27.533122  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:27.533199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:27.571096  148123 cri.go:89] found id: ""
	I1010 19:26:27.571122  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.571130  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:27.571135  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:27.571204  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:27.608762  148123 cri.go:89] found id: ""
	I1010 19:26:27.608798  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.608809  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:27.608818  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:27.608896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:27.648200  148123 cri.go:89] found id: ""
	I1010 19:26:27.648236  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.648248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:27.648260  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:27.648275  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.698124  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:27.698154  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:27.755009  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:27.755052  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:27.770048  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:27.770084  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:27.845475  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:27.845505  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:27.845519  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.729352  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:26.731831  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.566028  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.567052  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.555698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.049914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.423117  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:30.436706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:30.436768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:30.473367  148123 cri.go:89] found id: ""
	I1010 19:26:30.473396  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.473408  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:30.473417  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:30.473470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:30.510560  148123 cri.go:89] found id: ""
	I1010 19:26:30.510599  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.510610  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:30.510619  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:30.510687  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:30.549682  148123 cri.go:89] found id: ""
	I1010 19:26:30.549715  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.549726  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:30.549734  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:30.549798  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:30.586401  148123 cri.go:89] found id: ""
	I1010 19:26:30.586425  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.586434  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:30.586441  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:30.586492  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:30.622012  148123 cri.go:89] found id: ""
	I1010 19:26:30.622037  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.622045  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:30.622052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:30.622107  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:30.658336  148123 cri.go:89] found id: ""
	I1010 19:26:30.658364  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.658372  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:30.658378  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:30.658442  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:30.695495  148123 cri.go:89] found id: ""
	I1010 19:26:30.695523  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.695532  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:30.695538  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:30.695591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:30.735093  148123 cri.go:89] found id: ""
	I1010 19:26:30.735125  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.735136  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:30.735148  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:30.735163  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:30.816097  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:30.816140  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:30.853878  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:30.853915  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:30.906289  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:30.906341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:30.923742  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:30.923769  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:30.993226  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.493799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:33.508083  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:33.508166  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:33.545019  148123 cri.go:89] found id: ""
	I1010 19:26:33.545055  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.545072  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:33.545081  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:33.545150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:29.226673  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:31.227117  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.727777  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.068231  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.566025  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.050118  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:34.051720  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:36.550138  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.584215  148123 cri.go:89] found id: ""
	I1010 19:26:33.584242  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.584252  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:33.584259  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:33.584315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:33.620598  148123 cri.go:89] found id: ""
	I1010 19:26:33.620627  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.620636  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:33.620643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:33.620691  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:33.655225  148123 cri.go:89] found id: ""
	I1010 19:26:33.655252  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.655260  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:33.655267  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:33.655322  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:33.693464  148123 cri.go:89] found id: ""
	I1010 19:26:33.693490  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.693498  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:33.693505  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:33.693554  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:33.734024  148123 cri.go:89] found id: ""
	I1010 19:26:33.734059  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.734071  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:33.734079  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:33.734141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:33.769434  148123 cri.go:89] found id: ""
	I1010 19:26:33.769467  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.769476  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:33.769483  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:33.769533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:33.809011  148123 cri.go:89] found id: ""
	I1010 19:26:33.809040  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.809049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:33.809059  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:33.809071  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:33.864186  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:33.864230  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:33.878681  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:33.878711  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:33.958225  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.958250  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:33.958265  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:34.034730  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:34.034774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.577123  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:36.591494  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:36.591560  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:36.625170  148123 cri.go:89] found id: ""
	I1010 19:26:36.625210  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.625222  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:36.625230  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:36.625295  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:36.660790  148123 cri.go:89] found id: ""
	I1010 19:26:36.660821  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.660831  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:36.660838  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:36.660911  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:36.695742  148123 cri.go:89] found id: ""
	I1010 19:26:36.695774  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.695786  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:36.695793  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:36.695854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:36.729523  148123 cri.go:89] found id: ""
	I1010 19:26:36.729552  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.729561  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:36.729569  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:36.729630  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:36.765515  148123 cri.go:89] found id: ""
	I1010 19:26:36.765549  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.765561  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:36.765571  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:36.765635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:36.799110  148123 cri.go:89] found id: ""
	I1010 19:26:36.799144  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.799155  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:36.799164  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:36.799224  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:36.832806  148123 cri.go:89] found id: ""
	I1010 19:26:36.832834  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.832845  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:36.832865  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:36.832933  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:36.869093  148123 cri.go:89] found id: ""
	I1010 19:26:36.869126  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.869137  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:36.869149  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:36.869165  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:36.922229  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:36.922276  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:36.936973  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:36.937019  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:37.016400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:37.016425  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:37.016448  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:37.106308  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:37.106354  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.227451  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.726229  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:35.067396  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:37.565711  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.550438  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:41.050698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:39.652101  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:39.665546  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:39.665639  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:39.699646  148123 cri.go:89] found id: ""
	I1010 19:26:39.699675  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.699686  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:39.699693  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:39.699745  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:39.735568  148123 cri.go:89] found id: ""
	I1010 19:26:39.735592  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.735600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:39.735606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:39.735656  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:39.770021  148123 cri.go:89] found id: ""
	I1010 19:26:39.770049  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.770060  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:39.770067  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:39.770139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:39.804867  148123 cri.go:89] found id: ""
	I1010 19:26:39.804894  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.804903  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:39.804910  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:39.804972  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:39.847074  148123 cri.go:89] found id: ""
	I1010 19:26:39.847104  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.847115  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:39.847124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:39.847200  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:39.890334  148123 cri.go:89] found id: ""
	I1010 19:26:39.890368  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.890379  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:39.890387  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:39.890456  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:39.926316  148123 cri.go:89] found id: ""
	I1010 19:26:39.926346  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.926355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:39.926362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:39.926416  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:39.966179  148123 cri.go:89] found id: ""
	I1010 19:26:39.966215  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.966227  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:39.966239  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:39.966255  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:40.047457  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:40.047505  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.086611  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:40.086656  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:40.139123  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:40.139167  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:40.152966  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:40.152995  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:40.231009  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:42.731851  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:42.748201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:42.748287  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:42.787567  148123 cri.go:89] found id: ""
	I1010 19:26:42.787599  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.787607  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:42.787614  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:42.787697  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:42.831051  148123 cri.go:89] found id: ""
	I1010 19:26:42.831089  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.831100  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:42.831109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:42.831180  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:42.871352  148123 cri.go:89] found id: ""
	I1010 19:26:42.871390  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.871402  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:42.871410  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:42.871484  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:42.910283  148123 cri.go:89] found id: ""
	I1010 19:26:42.910316  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.910327  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:42.910335  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:42.910403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:42.946971  148123 cri.go:89] found id: ""
	I1010 19:26:42.947003  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.947012  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:42.947019  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:42.947087  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:42.985484  148123 cri.go:89] found id: ""
	I1010 19:26:42.985517  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.985528  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:42.985537  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:42.985603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:43.024500  148123 cri.go:89] found id: ""
	I1010 19:26:43.024535  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.024544  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:43.024552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:43.024608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:43.064008  148123 cri.go:89] found id: ""
	I1010 19:26:43.064039  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.064049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:43.064060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:43.064075  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:43.119367  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:43.119415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:43.133396  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:43.133428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:43.207447  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:43.207470  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:43.207491  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:43.290416  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:43.290456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.727919  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.227782  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:40.066461  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:42.565505  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.051835  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.052308  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.834115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:45.849172  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:45.849238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:45.886020  148123 cri.go:89] found id: ""
	I1010 19:26:45.886049  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.886060  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:45.886067  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:45.886152  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:45.923188  148123 cri.go:89] found id: ""
	I1010 19:26:45.923218  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.923245  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:45.923253  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:45.923316  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:45.960297  148123 cri.go:89] found id: ""
	I1010 19:26:45.960337  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.960351  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:45.960361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:45.960432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:45.995140  148123 cri.go:89] found id: ""
	I1010 19:26:45.995173  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.995184  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:45.995191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:45.995265  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:46.040390  148123 cri.go:89] found id: ""
	I1010 19:26:46.040418  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.040426  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:46.040433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:46.040500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:46.079999  148123 cri.go:89] found id: ""
	I1010 19:26:46.080029  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.080042  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:46.080052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:46.080105  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:46.117331  148123 cri.go:89] found id: ""
	I1010 19:26:46.117357  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.117364  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:46.117370  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:46.117441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:46.152248  148123 cri.go:89] found id: ""
	I1010 19:26:46.152279  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.152290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:46.152303  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:46.152319  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:46.192503  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:46.192533  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:46.246419  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:46.246456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:46.259864  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:46.259895  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:46.338518  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:46.338549  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:46.338567  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:45.726776  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.228318  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:44.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.065636  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.551013  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:50.053824  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.924216  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:48.938741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:48.938805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:48.973310  148123 cri.go:89] found id: ""
	I1010 19:26:48.973339  148123 logs.go:282] 0 containers: []
	W1010 19:26:48.973348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:48.973356  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:48.973411  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:49.010467  148123 cri.go:89] found id: ""
	I1010 19:26:49.010492  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.010500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:49.010506  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:49.010585  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:49.051621  148123 cri.go:89] found id: ""
	I1010 19:26:49.051646  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.051653  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:49.051664  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:49.051727  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:49.091090  148123 cri.go:89] found id: ""
	I1010 19:26:49.091121  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.091132  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:49.091140  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:49.091202  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:49.127669  148123 cri.go:89] found id: ""
	I1010 19:26:49.127712  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.127724  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:49.127732  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:49.127804  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:49.164446  148123 cri.go:89] found id: ""
	I1010 19:26:49.164476  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.164485  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:49.164491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:49.164553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:49.201222  148123 cri.go:89] found id: ""
	I1010 19:26:49.201263  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.201275  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:49.201284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:49.201345  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:49.238258  148123 cri.go:89] found id: ""
	I1010 19:26:49.238283  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.238290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:49.238299  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:49.238311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:49.313576  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:49.313619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:49.351066  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:49.351096  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:49.402772  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:49.402823  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:49.417713  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:49.417752  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:49.488834  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:51.989031  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:52.003056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:52.003140  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:52.039682  148123 cri.go:89] found id: ""
	I1010 19:26:52.039709  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.039718  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:52.039725  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:52.039788  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:52.081402  148123 cri.go:89] found id: ""
	I1010 19:26:52.081433  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.081443  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:52.081449  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:52.081502  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:52.118270  148123 cri.go:89] found id: ""
	I1010 19:26:52.118304  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.118315  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:52.118325  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:52.118392  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:52.154692  148123 cri.go:89] found id: ""
	I1010 19:26:52.154724  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.154735  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:52.154743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:52.154807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:52.190056  148123 cri.go:89] found id: ""
	I1010 19:26:52.190085  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.190094  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:52.190101  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:52.190161  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:52.225468  148123 cri.go:89] found id: ""
	I1010 19:26:52.225501  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.225511  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:52.225521  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:52.225589  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:52.260682  148123 cri.go:89] found id: ""
	I1010 19:26:52.260710  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.260718  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:52.260724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:52.260774  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:52.297613  148123 cri.go:89] found id: ""
	I1010 19:26:52.297642  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.297659  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:52.297672  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:52.297689  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:52.352224  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:52.352267  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:52.367003  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:52.367033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:52.443124  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:52.443157  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:52.443173  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:52.522391  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:52.522433  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:50.726363  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.727069  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:49.069109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:51.566132  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:53.567867  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.554195  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.050995  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.066877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:55.082191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:55.082258  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:55.120729  148123 cri.go:89] found id: ""
	I1010 19:26:55.120763  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.120775  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:55.120784  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:55.120863  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:55.154795  148123 cri.go:89] found id: ""
	I1010 19:26:55.154827  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.154839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:55.154848  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:55.154904  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:55.194846  148123 cri.go:89] found id: ""
	I1010 19:26:55.194876  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.194889  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:55.194897  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:55.194958  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:55.230173  148123 cri.go:89] found id: ""
	I1010 19:26:55.230201  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.230212  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:55.230220  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:55.230290  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:55.264999  148123 cri.go:89] found id: ""
	I1010 19:26:55.265025  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.265032  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:55.265039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:55.265096  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:55.303666  148123 cri.go:89] found id: ""
	I1010 19:26:55.303695  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.303703  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:55.303724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:55.303797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:55.342061  148123 cri.go:89] found id: ""
	I1010 19:26:55.342087  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.342098  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:55.342106  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:55.342171  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:55.378305  148123 cri.go:89] found id: ""
	I1010 19:26:55.378336  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.378345  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:55.378354  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:55.378365  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.416815  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:55.416865  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:55.467371  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:55.467415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:55.481704  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:55.481744  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:55.559219  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:55.559242  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:55.559254  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.141472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:58.154326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:58.154394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:58.193053  148123 cri.go:89] found id: ""
	I1010 19:26:58.193080  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.193088  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:58.193094  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:58.193142  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:58.232514  148123 cri.go:89] found id: ""
	I1010 19:26:58.232538  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.232545  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:58.232551  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:58.232599  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:58.266724  148123 cri.go:89] found id: ""
	I1010 19:26:58.266756  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.266765  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:58.266771  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:58.266824  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:58.301986  148123 cri.go:89] found id: ""
	I1010 19:26:58.302012  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.302019  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:58.302031  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:58.302080  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:58.340394  148123 cri.go:89] found id: ""
	I1010 19:26:58.340431  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.340440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:58.340448  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:58.340500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:58.407035  148123 cri.go:89] found id: ""
	I1010 19:26:58.407069  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.407081  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:58.407088  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:58.407151  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:58.446650  148123 cri.go:89] found id: ""
	I1010 19:26:58.446682  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.446694  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:58.446703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:58.446763  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:58.480719  148123 cri.go:89] found id: ""
	I1010 19:26:58.480750  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.480759  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:58.480768  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:58.480781  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.561568  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:58.561602  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.227199  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.726841  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:56.065787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.566732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.550718  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:59.550793  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.609237  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:58.609264  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:58.659705  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:58.659748  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:58.674646  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:58.674680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:58.750922  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.252098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:01.265420  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:01.265497  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:01.300133  148123 cri.go:89] found id: ""
	I1010 19:27:01.300168  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.300180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:01.300188  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:01.300272  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:01.337451  148123 cri.go:89] found id: ""
	I1010 19:27:01.337477  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.337485  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:01.337492  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:01.337539  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:01.371987  148123 cri.go:89] found id: ""
	I1010 19:27:01.372019  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.372028  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:01.372034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:01.372098  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:01.408996  148123 cri.go:89] found id: ""
	I1010 19:27:01.409022  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.409033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:01.409042  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:01.409109  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:01.443558  148123 cri.go:89] found id: ""
	I1010 19:27:01.443587  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.443595  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:01.443602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:01.443663  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:01.478517  148123 cri.go:89] found id: ""
	I1010 19:27:01.478546  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.478555  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:01.478561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:01.478611  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:01.513528  148123 cri.go:89] found id: ""
	I1010 19:27:01.513556  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.513568  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:01.513576  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:01.513641  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:01.558861  148123 cri.go:89] found id: ""
	I1010 19:27:01.558897  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.558909  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:01.558921  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:01.558937  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:01.599719  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:01.599747  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:01.650736  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:01.650774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:01.664643  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:01.664669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:01.736533  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.736557  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:01.736570  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:00.225540  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.226962  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:00.567193  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:03.066587  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.050439  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.050984  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:06.550977  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.317302  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:04.330450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:04.330519  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:04.368893  148123 cri.go:89] found id: ""
	I1010 19:27:04.368923  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.368932  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:04.368939  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:04.368993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:04.406598  148123 cri.go:89] found id: ""
	I1010 19:27:04.406625  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.406634  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:04.406640  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:04.406692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:04.441485  148123 cri.go:89] found id: ""
	I1010 19:27:04.441515  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.441525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:04.441532  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:04.441581  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:04.476663  148123 cri.go:89] found id: ""
	I1010 19:27:04.476690  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.476698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:04.476704  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:04.476765  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:04.512124  148123 cri.go:89] found id: ""
	I1010 19:27:04.512171  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.512186  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:04.512195  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:04.512260  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:04.547902  148123 cri.go:89] found id: ""
	I1010 19:27:04.547929  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.547940  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:04.547949  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:04.548008  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:04.585257  148123 cri.go:89] found id: ""
	I1010 19:27:04.585287  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.585297  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:04.585303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:04.585352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:04.621019  148123 cri.go:89] found id: ""
	I1010 19:27:04.621048  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.621057  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:04.621068  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:04.621080  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:04.662770  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:04.662801  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:04.714551  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:04.714592  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:04.730922  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:04.730951  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:04.841139  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.841163  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:04.841178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.428528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:07.442423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:07.442498  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:07.476722  148123 cri.go:89] found id: ""
	I1010 19:27:07.476753  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.476764  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:07.476772  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:07.476835  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:07.512211  148123 cri.go:89] found id: ""
	I1010 19:27:07.512245  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.512256  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:07.512263  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:07.512325  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:07.546546  148123 cri.go:89] found id: ""
	I1010 19:27:07.546588  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.546601  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:07.546619  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:07.546688  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:07.585743  148123 cri.go:89] found id: ""
	I1010 19:27:07.585768  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.585777  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:07.585783  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:07.585848  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:07.626827  148123 cri.go:89] found id: ""
	I1010 19:27:07.626855  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.626865  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:07.626874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:07.626960  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:07.661910  148123 cri.go:89] found id: ""
	I1010 19:27:07.661940  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.661948  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:07.661955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:07.662072  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:07.698255  148123 cri.go:89] found id: ""
	I1010 19:27:07.698288  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.698301  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:07.698309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:07.698374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:07.734389  148123 cri.go:89] found id: ""
	I1010 19:27:07.734417  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.734428  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:07.734440  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:07.734454  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.814089  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:07.814130  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:07.854799  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:07.854831  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:07.906595  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:07.906635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:07.921928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:07.921966  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:07.989839  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.727522  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.226694  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:05.565868  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.567139  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:09.050772  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:11.051291  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.490115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:10.504216  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:10.504292  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:10.554789  148123 cri.go:89] found id: ""
	I1010 19:27:10.554815  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.554825  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:10.554831  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:10.554889  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:10.618970  148123 cri.go:89] found id: ""
	I1010 19:27:10.618997  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.619005  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:10.619012  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:10.619061  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:10.666900  148123 cri.go:89] found id: ""
	I1010 19:27:10.666933  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.666946  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:10.666953  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:10.667023  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:10.704364  148123 cri.go:89] found id: ""
	I1010 19:27:10.704405  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.704430  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:10.704450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:10.704525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:10.742341  148123 cri.go:89] found id: ""
	I1010 19:27:10.742380  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.742389  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:10.742396  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:10.742461  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:10.783056  148123 cri.go:89] found id: ""
	I1010 19:27:10.783088  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.783098  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:10.783105  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:10.783174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:10.823198  148123 cri.go:89] found id: ""
	I1010 19:27:10.823224  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.823233  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:10.823243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:10.823302  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:10.858154  148123 cri.go:89] found id: ""
	I1010 19:27:10.858195  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.858204  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:10.858213  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:10.858229  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:10.935491  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:10.935518  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:10.935532  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:11.015653  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:11.015694  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:11.058765  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:11.058793  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:11.116748  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:11.116799  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:09.727270  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.225797  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.065372  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.065695  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.550669  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.051044  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.631579  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:13.644885  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:13.644953  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:13.685242  148123 cri.go:89] found id: ""
	I1010 19:27:13.685273  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.685285  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:13.685294  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:13.685364  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:13.720831  148123 cri.go:89] found id: ""
	I1010 19:27:13.720866  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.720877  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:13.720885  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:13.720947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:13.766374  148123 cri.go:89] found id: ""
	I1010 19:27:13.766404  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.766413  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:13.766419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:13.766468  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:13.804550  148123 cri.go:89] found id: ""
	I1010 19:27:13.804585  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.804597  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:13.804606  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:13.804674  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:13.842406  148123 cri.go:89] found id: ""
	I1010 19:27:13.842432  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.842440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:13.842460  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:13.842678  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:13.882365  148123 cri.go:89] found id: ""
	I1010 19:27:13.882402  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.882415  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:13.882423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:13.882505  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:13.926016  148123 cri.go:89] found id: ""
	I1010 19:27:13.926052  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.926065  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:13.926075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:13.926177  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:13.960485  148123 cri.go:89] found id: ""
	I1010 19:27:13.960523  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.960533  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:13.960542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:13.960558  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:14.013013  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:14.013055  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:14.026906  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:14.026935  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:14.104469  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:14.104494  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:14.104507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:14.185917  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:14.185959  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:16.732088  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:16.746646  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:16.746710  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:16.784715  148123 cri.go:89] found id: ""
	I1010 19:27:16.784739  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.784749  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:16.784755  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:16.784807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:16.824181  148123 cri.go:89] found id: ""
	I1010 19:27:16.824210  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.824220  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:16.824228  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:16.824289  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:16.860901  148123 cri.go:89] found id: ""
	I1010 19:27:16.860932  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.860941  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:16.860947  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:16.860997  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:16.895127  148123 cri.go:89] found id: ""
	I1010 19:27:16.895159  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.895175  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:16.895183  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:16.895266  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:16.931989  148123 cri.go:89] found id: ""
	I1010 19:27:16.932020  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.932028  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:16.932035  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:16.932086  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:16.967254  148123 cri.go:89] found id: ""
	I1010 19:27:16.967283  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.967292  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:16.967299  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:16.967347  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:17.003010  148123 cri.go:89] found id: ""
	I1010 19:27:17.003035  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.003043  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:17.003050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:17.003097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:17.040053  148123 cri.go:89] found id: ""
	I1010 19:27:17.040093  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.040102  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:17.040118  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:17.040131  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:17.093176  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:17.093217  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:17.108086  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:17.108116  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:17.186423  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:17.186452  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:17.186467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:17.271884  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:17.271926  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:14.227197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.739354  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:14.066233  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.565852  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.566337  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.051613  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:20.549888  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:19.817954  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:19.831924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:19.831993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:19.873415  148123 cri.go:89] found id: ""
	I1010 19:27:19.873441  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.873459  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:19.873466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:19.873527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:19.912174  148123 cri.go:89] found id: ""
	I1010 19:27:19.912207  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.912219  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:19.912227  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:19.912298  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:19.948422  148123 cri.go:89] found id: ""
	I1010 19:27:19.948456  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.948466  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:19.948472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:19.948524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:19.983917  148123 cri.go:89] found id: ""
	I1010 19:27:19.983949  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.983962  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:19.983970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:19.984024  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:20.019160  148123 cri.go:89] found id: ""
	I1010 19:27:20.019187  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.019198  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:20.019207  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:20.019271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:20.056073  148123 cri.go:89] found id: ""
	I1010 19:27:20.056104  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.056116  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:20.056124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:20.056187  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:20.095877  148123 cri.go:89] found id: ""
	I1010 19:27:20.095916  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.095928  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:20.095935  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:20.096007  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:20.132360  148123 cri.go:89] found id: ""
	I1010 19:27:20.132385  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.132394  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:20.132402  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:20.132413  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:20.190573  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:20.190619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:20.205785  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:20.205819  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:20.278882  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:20.278911  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:20.278924  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:20.363982  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:20.364024  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:22.906059  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:22.919839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:22.919912  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:22.958203  148123 cri.go:89] found id: ""
	I1010 19:27:22.958242  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.958252  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:22.958258  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:22.958312  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:22.995834  148123 cri.go:89] found id: ""
	I1010 19:27:22.995866  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.995874  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:22.995880  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:22.995945  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:23.035913  148123 cri.go:89] found id: ""
	I1010 19:27:23.035950  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.035962  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:23.035970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:23.036039  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:23.073995  148123 cri.go:89] found id: ""
	I1010 19:27:23.074036  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.074049  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:23.074057  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:23.074117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:23.111190  148123 cri.go:89] found id: ""
	I1010 19:27:23.111222  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.111234  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:23.111243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:23.111305  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:23.147771  148123 cri.go:89] found id: ""
	I1010 19:27:23.147797  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.147806  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:23.147814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:23.147878  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:23.185485  148123 cri.go:89] found id: ""
	I1010 19:27:23.185517  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.185527  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:23.185535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:23.185598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:23.222030  148123 cri.go:89] found id: ""
	I1010 19:27:23.222060  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.222070  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:23.222081  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:23.222097  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:23.301826  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:23.301849  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:23.301861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:23.378688  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:23.378730  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:23.426683  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:23.426717  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:23.478425  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:23.478467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:19.226994  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.727366  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.067094  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:23.567075  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:22.550076  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:24.551681  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:25.993768  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:26.008311  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:26.008383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:26.046315  148123 cri.go:89] found id: ""
	I1010 19:27:26.046343  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.046355  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:26.046364  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:26.046418  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:26.081823  148123 cri.go:89] found id: ""
	I1010 19:27:26.081847  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.081855  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:26.081861  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:26.081913  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:26.117139  148123 cri.go:89] found id: ""
	I1010 19:27:26.117167  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.117183  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:26.117191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:26.117242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:26.154477  148123 cri.go:89] found id: ""
	I1010 19:27:26.154501  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.154510  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:26.154517  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:26.154568  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:26.187497  148123 cri.go:89] found id: ""
	I1010 19:27:26.187527  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.187535  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:26.187541  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:26.187604  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:26.223110  148123 cri.go:89] found id: ""
	I1010 19:27:26.223140  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.223151  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:26.223160  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:26.223226  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:26.264975  148123 cri.go:89] found id: ""
	I1010 19:27:26.265003  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.265011  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:26.265017  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:26.265067  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:26.307467  148123 cri.go:89] found id: ""
	I1010 19:27:26.307494  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.307503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:26.307512  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:26.307524  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:26.346313  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:26.346342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:26.400069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:26.400109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:26.415467  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:26.415500  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:26.486986  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:26.487016  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:26.487033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:24.226736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.228720  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.726470  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.067100  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.565675  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:27.051110  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.051207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.553085  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.064522  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:29.077631  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:29.077696  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:29.116127  148123 cri.go:89] found id: ""
	I1010 19:27:29.116154  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.116164  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:29.116172  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:29.116241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:29.152863  148123 cri.go:89] found id: ""
	I1010 19:27:29.152893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.152901  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:29.152907  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:29.152963  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:29.189951  148123 cri.go:89] found id: ""
	I1010 19:27:29.189983  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.189992  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:29.189999  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:29.190060  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:29.226611  148123 cri.go:89] found id: ""
	I1010 19:27:29.226646  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.226657  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:29.226665  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:29.226729  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:29.271884  148123 cri.go:89] found id: ""
	I1010 19:27:29.271921  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.271934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:29.271944  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:29.272016  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:29.306128  148123 cri.go:89] found id: ""
	I1010 19:27:29.306168  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.306181  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:29.306191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:29.306255  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:29.341869  148123 cri.go:89] found id: ""
	I1010 19:27:29.341893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.341901  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:29.341908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:29.341962  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:29.376213  148123 cri.go:89] found id: ""
	I1010 19:27:29.376240  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.376249  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:29.376259  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:29.376273  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:29.428827  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:29.428878  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:29.443719  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:29.443754  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:29.516166  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:29.516193  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:29.516208  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:29.596643  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:29.596681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:32.135286  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:32.148791  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:32.148893  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:32.185511  148123 cri.go:89] found id: ""
	I1010 19:27:32.185542  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.185553  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:32.185560  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:32.185626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:32.222908  148123 cri.go:89] found id: ""
	I1010 19:27:32.222934  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.222942  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:32.222948  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:32.222999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:32.260998  148123 cri.go:89] found id: ""
	I1010 19:27:32.261033  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.261045  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:32.261063  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:32.261128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:32.298311  148123 cri.go:89] found id: ""
	I1010 19:27:32.298339  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.298348  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:32.298355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:32.298409  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:32.334134  148123 cri.go:89] found id: ""
	I1010 19:27:32.334220  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.334236  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:32.334249  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:32.334319  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:32.369690  148123 cri.go:89] found id: ""
	I1010 19:27:32.369723  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.369735  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:32.369743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:32.369807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:32.407164  148123 cri.go:89] found id: ""
	I1010 19:27:32.407210  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.407218  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:32.407224  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:32.407279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:32.446468  148123 cri.go:89] found id: ""
	I1010 19:27:32.446494  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.446505  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:32.446519  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:32.446535  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:32.497953  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:32.497997  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:32.513590  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:32.513620  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:32.594037  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:32.594072  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:32.594090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:32.673546  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:32.673587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:30.727725  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:32.727813  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.066731  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:33.067815  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:34.050574  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:36.550119  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.226084  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:35.241152  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:35.241222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:35.283208  148123 cri.go:89] found id: ""
	I1010 19:27:35.283245  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.283269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:35.283286  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:35.283346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:35.320406  148123 cri.go:89] found id: ""
	I1010 19:27:35.320444  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.320457  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:35.320466  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:35.320523  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:35.360984  148123 cri.go:89] found id: ""
	I1010 19:27:35.361015  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.361027  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:35.361034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:35.361097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:35.400191  148123 cri.go:89] found id: ""
	I1010 19:27:35.400219  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.400230  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:35.400244  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:35.400314  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:35.438092  148123 cri.go:89] found id: ""
	I1010 19:27:35.438126  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.438139  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:35.438147  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:35.438215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:35.472656  148123 cri.go:89] found id: ""
	I1010 19:27:35.472681  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.472688  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:35.472694  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:35.472743  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.505895  148123 cri.go:89] found id: ""
	I1010 19:27:35.505931  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.505942  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:35.505950  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:35.506015  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:35.540406  148123 cri.go:89] found id: ""
	I1010 19:27:35.540441  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.540451  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:35.540462  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:35.540478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:35.555874  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:35.555903  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:35.628400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:35.628427  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:35.628453  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:35.708262  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:35.708305  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:35.752796  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:35.752824  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.306212  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:38.319473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:38.319543  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:38.357493  148123 cri.go:89] found id: ""
	I1010 19:27:38.357520  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.357529  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:38.357536  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:38.357588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:38.395908  148123 cri.go:89] found id: ""
	I1010 19:27:38.395935  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.395943  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:38.395949  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:38.396010  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:38.434495  148123 cri.go:89] found id: ""
	I1010 19:27:38.434529  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.434539  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:38.434545  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:38.434608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:38.474647  148123 cri.go:89] found id: ""
	I1010 19:27:38.474686  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.474698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:38.474710  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:38.474784  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:38.511105  148123 cri.go:89] found id: ""
	I1010 19:27:38.511133  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.511141  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:38.511148  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:38.511199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:38.551011  148123 cri.go:89] found id: ""
	I1010 19:27:38.551055  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.551066  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:38.551074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:38.551139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.227301  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:37.726528  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.567838  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.066658  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.552499  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.544561  147758 pod_ready.go:82] duration metric: took 4m0.00091784s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	E1010 19:27:40.544600  147758 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:27:40.544623  147758 pod_ready.go:39] duration metric: took 4m15.623470592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:27:40.544664  147758 kubeadm.go:597] duration metric: took 4m22.92080204s to restartPrimaryControlPlane
	W1010 19:27:40.544737  147758 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:40.544829  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:38.590579  148123 cri.go:89] found id: ""
	I1010 19:27:38.590606  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.590615  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:38.590621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:38.590686  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:38.627775  148123 cri.go:89] found id: ""
	I1010 19:27:38.627812  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.627826  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:38.627838  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:38.627854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.683195  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:38.683239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:38.697220  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:38.697250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:38.771619  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:38.771651  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:38.771668  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:38.851324  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:38.851362  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.408165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:41.422958  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:41.423032  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:41.466774  148123 cri.go:89] found id: ""
	I1010 19:27:41.466806  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.466816  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:41.466823  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:41.466874  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:41.506429  148123 cri.go:89] found id: ""
	I1010 19:27:41.506470  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.506482  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:41.506489  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:41.506555  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:41.548929  148123 cri.go:89] found id: ""
	I1010 19:27:41.548965  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.548976  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:41.548983  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:41.549044  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:41.592243  148123 cri.go:89] found id: ""
	I1010 19:27:41.592274  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.592283  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:41.592290  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:41.592342  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:41.628516  148123 cri.go:89] found id: ""
	I1010 19:27:41.628558  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.628570  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:41.628578  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:41.628650  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:41.665011  148123 cri.go:89] found id: ""
	I1010 19:27:41.665041  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.665053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:41.665060  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:41.665112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:41.702650  148123 cri.go:89] found id: ""
	I1010 19:27:41.702681  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.702692  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:41.702700  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:41.702764  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:41.738971  148123 cri.go:89] found id: ""
	I1010 19:27:41.739002  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.739021  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:41.739031  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:41.739062  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:41.813296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:41.813321  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:41.813335  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:41.895138  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:41.895185  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.941975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:41.942012  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:41.994838  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:41.994888  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:39.727140  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:41.728263  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.566241  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:43.065219  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:44.511153  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:44.524871  148123 kubeadm.go:597] duration metric: took 4m4.194337013s to restartPrimaryControlPlane
	W1010 19:27:44.524953  148123 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:44.524988  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:44.994063  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:27:45.010926  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:27:45.021757  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:27:45.032246  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:27:45.032270  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:27:45.032326  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:27:45.042799  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:27:45.042866  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:27:45.053017  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:27:45.063373  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:27:45.063445  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:27:45.074689  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.085187  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:27:45.085241  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.096275  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:27:45.106686  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:27:45.106750  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:27:45.117018  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:27:45.358920  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
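The cleanup sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it, so kubeadm init can regenerate them. A minimal sketch of that idea follows, using only the file list and endpoint string visible in the log; it is not minikube's kubeadm.go code.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: treat as stale and remove it.
				if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
					fmt.Printf("could not remove %s: %v\n", f, rmErr)
				}
				continue
			}
			fmt.Printf("%s already points at %s, keeping it\n", f, endpoint)
		}
	}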
	I1010 19:27:44.226853  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:46.227586  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:48.727469  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:45.066410  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:47.569864  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:51.230704  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:53.727351  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:50.065845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:52.066267  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:55.727457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:58.226861  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:54.564611  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:56.566702  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:00.728542  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.225779  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:59.065614  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:01.068088  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.566502  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.739904  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.195045639s)
	I1010 19:28:06.739984  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:06.756046  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:06.768580  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:06.780663  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:06.780732  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:06.780807  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:28:06.792092  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:06.792179  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:06.804515  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:28:06.814969  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:06.815040  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:06.826056  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.836050  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:06.836108  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.846125  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:28:06.855505  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:06.855559  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:06.865367  147758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:06.916227  147758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:06.916375  147758 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:07.036539  147758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:07.036652  147758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:07.036762  147758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:07.044897  147758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:07.046978  147758 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:07.047117  147758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:07.047229  147758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:07.047384  147758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:07.047467  147758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:07.047584  147758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:07.047675  147758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:07.047794  147758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:07.047902  147758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:07.048005  147758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:07.048093  147758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:07.048142  147758 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:07.048210  147758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:07.127836  147758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:07.434492  147758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:07.487567  147758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:07.731314  147758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:07.919060  147758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:07.919565  147758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:07.922740  147758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:05.227611  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.229836  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.065246  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:08.067360  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.925140  147758 out.go:235]   - Booting up control plane ...
	I1010 19:28:07.925239  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:07.925356  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:07.925444  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:07.944375  147758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:07.951182  147758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:07.951274  147758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:08.087325  147758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:08.087560  147758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:08.598361  147758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.081439ms
	I1010 19:28:08.598502  147758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:09.727932  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:12.227939  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:10.566945  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:13.067142  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.100517  147758 kubeadm.go:310] [api-check] The API server is healthy after 5.501985157s
	I1010 19:28:14.119932  147758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:14.149557  147758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:14.207413  147758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:14.207735  147758 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-541370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:14.226199  147758 kubeadm.go:310] [bootstrap-token] Using token: sbg4v0.t5me93bb5vn8m913
	I1010 19:28:14.228059  147758 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:14.228208  147758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:14.241706  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:14.256554  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:14.263129  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:14.274346  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:14.282313  147758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:14.507850  147758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:14.970234  147758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:15.508328  147758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:15.509530  147758 kubeadm.go:310] 
	I1010 19:28:15.509635  147758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:15.509653  147758 kubeadm.go:310] 
	I1010 19:28:15.509743  147758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:15.509762  147758 kubeadm.go:310] 
	I1010 19:28:15.509795  147758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:15.509888  147758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:15.509954  147758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:15.509970  147758 kubeadm.go:310] 
	I1010 19:28:15.510083  147758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:15.510103  147758 kubeadm.go:310] 
	I1010 19:28:15.510203  147758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:15.510214  147758 kubeadm.go:310] 
	I1010 19:28:15.510297  147758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:15.510410  147758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:15.510489  147758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:15.510495  147758 kubeadm.go:310] 
	I1010 19:28:15.510603  147758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:15.510707  147758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:15.510724  147758 kubeadm.go:310] 
	I1010 19:28:15.510807  147758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.510958  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:15.511005  147758 kubeadm.go:310] 	--control-plane 
	I1010 19:28:15.511034  147758 kubeadm.go:310] 
	I1010 19:28:15.511161  147758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:15.511173  147758 kubeadm.go:310] 
	I1010 19:28:15.511268  147758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.511403  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:15.512298  147758 kubeadm.go:310] W1010 19:28:06.890572    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512594  147758 kubeadm.go:310] W1010 19:28:06.891448    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512702  147758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:15.512734  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:28:15.512744  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:15.514703  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:15.516229  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:15.527554  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:15.549266  147758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:15.549362  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:15.549399  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-541370 minikube.k8s.io/updated_at=2024_10_10T19_28_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=embed-certs-541370 minikube.k8s.io/primary=true
	I1010 19:28:15.590732  147758 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:15.740942  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.241392  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.741807  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:14.229241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:16.727260  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.059512  148525 pod_ready.go:82] duration metric: took 4m0.00022742s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:14.059550  148525 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:28:14.059569  148525 pod_ready.go:39] duration metric: took 4m7.001942194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:14.059614  148525 kubeadm.go:597] duration metric: took 4m14.998320151s to restartPrimaryControlPlane
	W1010 19:28:14.059672  148525 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:28:14.059698  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:28:17.241315  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:17.741580  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.241006  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.742042  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.241251  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.741030  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.862541  147758 kubeadm.go:1113] duration metric: took 4.313246481s to wait for elevateKubeSystemPrivileges
	I1010 19:28:19.862579  147758 kubeadm.go:394] duration metric: took 5m2.288571479s to StartCluster
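The repeated `kubectl get sa default` calls above are a wait loop: the cluster is not considered started until the default ServiceAccount exists. A minimal sketch of such a polling loop is shown below (hypothetical helper name and timeout; not minikube's elevateKubeSystemPrivileges code).

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultServiceAccount retries `kubectl get sa default` until it
	// succeeds or the timeout elapses, matching the ~0.5s cadence in the log.
	func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // the default ServiceAccount is present
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}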
	I1010 19:28:19.862628  147758 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.862751  147758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:19.864528  147758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.864812  147758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:19.864910  147758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:19.865019  147758 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-541370"
	I1010 19:28:19.865041  147758 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-541370"
	W1010 19:28:19.865053  147758 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:19.865062  147758 addons.go:69] Setting default-storageclass=true in profile "embed-certs-541370"
	I1010 19:28:19.865085  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865077  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:19.865129  147758 addons.go:69] Setting metrics-server=true in profile "embed-certs-541370"
	I1010 19:28:19.865164  147758 addons.go:234] Setting addon metrics-server=true in "embed-certs-541370"
	W1010 19:28:19.865179  147758 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:19.865115  147758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-541370"
	I1010 19:28:19.865215  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865558  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865593  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865607  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865629  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865595  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865725  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.866857  147758 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:19.868590  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:19.882524  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I1010 19:28:19.882595  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I1010 19:28:19.882678  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I1010 19:28:19.883065  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883168  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883281  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883559  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883575  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883657  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883669  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883802  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883818  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883968  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.883976  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884141  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884194  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.884408  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884437  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.884684  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884746  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.887912  147758 addons.go:234] Setting addon default-storageclass=true in "embed-certs-541370"
	W1010 19:28:19.887942  147758 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:19.887973  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.888333  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.888383  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.901588  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1010 19:28:19.902131  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.902597  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.902621  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.902927  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.903101  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.904556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.905207  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1010 19:28:19.905621  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.906188  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.906209  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.906599  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.906647  147758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:19.906837  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.907699  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1010 19:28:19.908147  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.908557  147758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:19.908584  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:19.908610  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.908705  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.908717  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.908745  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.909364  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.910154  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.910208  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.910840  147758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:19.912716  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.912722  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:19.912743  147758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:19.912769  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.913199  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.913224  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.913500  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.913682  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.913845  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.913972  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.921800  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922343  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.922374  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922653  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.922842  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.922965  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.923108  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.935097  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I1010 19:28:19.935605  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.936123  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.936146  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.936561  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.936747  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.938789  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.939019  147758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:19.939034  147758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:19.939054  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.941682  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942137  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.942165  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942404  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.942642  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.942767  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.942915  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:20.108247  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:20.149819  147758 node_ready.go:35] waiting up to 6m0s for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163096  147758 node_ready.go:49] node "embed-certs-541370" has status "Ready":"True"
	I1010 19:28:20.163118  147758 node_ready.go:38] duration metric: took 13.26779ms for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163128  147758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:20.168620  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:20.241952  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:20.241978  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:20.249679  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:20.290149  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:20.290190  147758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:20.291475  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:20.410539  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.410582  147758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:20.491567  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.684370  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684403  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.684695  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.684742  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.684749  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.684756  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684764  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.685029  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.685059  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.685036  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.695901  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.695926  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.696202  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.696249  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439463  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147952803s)
	I1010 19:28:21.439626  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.439659  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.439951  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.439969  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.439976  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439997  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.440009  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.440299  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.440298  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.440314  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.780486  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.288854773s)
	I1010 19:28:21.780551  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.780567  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.780948  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.780980  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.780996  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781007  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.781016  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.781289  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.781310  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781331  147758 addons.go:475] Verifying addon metrics-server=true in "embed-certs-541370"
	I1010 19:28:21.783512  147758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:21.784958  147758 addons.go:510] duration metric: took 1.92006141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
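The addon phase above stages each manifest under /etc/kubernetes/addons and then applies them with a single kubectl invocation. A minimal Go sketch of that apply step, assuming kubectl is on PATH; the helper name and error handling are illustrative, only the file paths are taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// applyManifests mirrors the single "kubectl apply -f a.yaml -f b.yaml ..." call
// seen in the log: all files go into one invocation so they are applied together.
func applyManifests(kubeconfig string, manifests []string) error {
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	if err := applyManifests("/var/lib/minikube/kubeconfig", manifests); err != nil {
		fmt.Println(err)
	}
}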
	I1010 19:28:19.225844  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:21.227960  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:23.726439  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:22.195129  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:24.678736  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:25.727053  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.727657  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.177348  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:29.177459  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.177485  147758 pod_ready.go:82] duration metric: took 9.008841503s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.177495  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182744  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.182777  147758 pod_ready.go:82] duration metric: took 5.273263ms for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182791  147758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191507  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.191539  147758 pod_ready.go:82] duration metric: took 8.738961ms for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191554  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199167  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.199218  147758 pod_ready.go:82] duration metric: took 7.635672ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199234  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204558  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.204581  147758 pod_ready.go:82] duration metric: took 5.337574ms for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204591  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573781  147758 pod_ready.go:93] pod "kube-proxy-6hdds" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.573808  147758 pod_ready.go:82] duration metric: took 369.210969ms for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573818  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974015  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.974039  147758 pod_ready.go:82] duration metric: took 400.214845ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974048  147758 pod_ready.go:39] duration metric: took 9.810911064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
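pod_ready.go above polls each system-critical pod until its Ready condition reports True. A rough client-go sketch of the same check, assuming client-go is available and using the kubeconfig path from the log; podReady, the poll interval and the pod name are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True -- the condition
// pod_ready.go waits on for each control-plane pod above.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		ok, err := podReady(ctx, cs, "kube-system", "etcd-embed-certs-541370")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod")
			return
		case <-time.After(2 * time.Second):
		}
	}
}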
	I1010 19:28:29.974066  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:29.974120  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:29.991332  147758 api_server.go:72] duration metric: took 10.126480862s to wait for apiserver process to appear ...
	I1010 19:28:29.991356  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:29.991382  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:28:29.995855  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:28:29.997488  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:28:29.997516  147758 api_server.go:131] duration metric: took 6.152312ms to wait for apiserver health ...
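api_server.go above probes /healthz on the apiserver and only proceeds once it returns 200 with the literal body "ok". A minimal sketch of that probe against the address from the log; certificate verification is skipped purely to keep the example short, a real check would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The log checks https://192.168.39.120:8443/healthz and expects "ok".
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is an illustration shortcut only; load the cluster
		// CA certificate into the TLS config for a real health check.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.120:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}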
	I1010 19:28:29.997526  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:28:30.176631  147758 system_pods.go:59] 9 kube-system pods found
	I1010 19:28:30.176662  147758 system_pods.go:61] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.176668  147758 system_pods.go:61] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.176672  147758 system_pods.go:61] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.176676  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.176680  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.176683  147758 system_pods.go:61] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.176686  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.176693  147758 system_pods.go:61] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.176699  147758 system_pods.go:61] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.176707  147758 system_pods.go:74] duration metric: took 179.174083ms to wait for pod list to return data ...
	I1010 19:28:30.176714  147758 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:28:30.375326  147758 default_sa.go:45] found service account: "default"
	I1010 19:28:30.375361  147758 default_sa.go:55] duration metric: took 198.640267ms for default service account to be created ...
	I1010 19:28:30.375374  147758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:28:30.578749  147758 system_pods.go:86] 9 kube-system pods found
	I1010 19:28:30.578780  147758 system_pods.go:89] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.578786  147758 system_pods.go:89] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.578790  147758 system_pods.go:89] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.578794  147758 system_pods.go:89] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.578797  147758 system_pods.go:89] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.578801  147758 system_pods.go:89] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.578804  147758 system_pods.go:89] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.578810  147758 system_pods.go:89] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.578814  147758 system_pods.go:89] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.578822  147758 system_pods.go:126] duration metric: took 203.441477ms to wait for k8s-apps to be running ...
	I1010 19:28:30.578829  147758 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:28:30.578877  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:30.596523  147758 system_svc.go:56] duration metric: took 17.684729ms WaitForService to wait for kubelet
	I1010 19:28:30.596553  147758 kubeadm.go:582] duration metric: took 10.731708748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:28:30.596573  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:28:30.774749  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:28:30.774783  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:28:30.774807  147758 node_conditions.go:105] duration metric: took 178.228671ms to run NodePressure ...
	I1010 19:28:30.774822  147758 start.go:241] waiting for startup goroutines ...
	I1010 19:28:30.774831  147758 start.go:246] waiting for cluster config update ...
	I1010 19:28:30.774845  147758 start.go:255] writing updated cluster config ...
	I1010 19:28:30.775121  147758 ssh_runner.go:195] Run: rm -f paused
	I1010 19:28:30.826689  147758 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:28:30.828795  147758 out.go:177] * Done! kubectl is now configured to use "embed-certs-541370" cluster and "default" namespace by default
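The final start.go line above compares the local kubectl version against the cluster version and reports the minor-version skew (kubectl is supported within one minor version of the apiserver). A small sketch of that comparison, with the parsing reduced to plain string splitting:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of two
// "MAJOR.MINOR.PATCH" style version strings, e.g. "1.31.1" vs "1.31.1" -> 0.
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, err := minorSkew("1.31.1", "1.31.1")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubectl: 1.31.1, cluster: 1.31.1 (minor skew: %d)\n", skew)
}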
	I1010 19:28:29.728096  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:32.229632  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:34.726536  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:36.727032  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:38.727488  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:40.372903  148525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.31317648s)
	I1010 19:28:40.372991  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:40.389319  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:40.400123  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:40.411906  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:40.411932  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:40.411976  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:28:40.421840  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:40.421904  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:40.432229  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:28:40.442121  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:40.442203  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:40.452969  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.463085  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:40.463146  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.473103  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:28:40.482854  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:40.482914  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
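The block above checks each leftover kubeconfig file for the expected control-plane endpoint and removes any file that does not reference it (or does not exist) before re-running kubeadm init. A simplified sketch of that check-then-remove loop; the endpoint and file list come from the log, the control flow is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing;
		// either way the config cannot be reused, so it is removed before kubeadm init.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}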
	I1010 19:28:40.494023  148525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:40.543369  148525 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:40.543466  148525 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:40.657301  148525 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:40.657462  148525 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:40.657579  148525 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:40.669222  148525 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:40.670995  148525 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:40.671102  148525 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:40.671171  148525 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:40.671284  148525 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:40.671374  148525 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:40.671471  148525 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:40.671557  148525 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:40.671650  148525 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:40.671751  148525 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:40.671895  148525 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:40.672000  148525 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:40.672056  148525 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:40.672136  148525 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:40.876613  148525 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:41.109518  148525 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:41.186751  148525 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:41.424710  148525 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:41.479611  148525 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:41.480235  148525 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:41.483222  148525 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:41.227521  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:43.728023  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:41.484809  148525 out.go:235]   - Booting up control plane ...
	I1010 19:28:41.484935  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:41.485020  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:41.485317  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:41.506919  148525 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:41.517006  148525 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:41.517077  148525 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:41.653211  148525 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:41.653364  148525 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:42.655360  148525 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001910447s
	I1010 19:28:42.655482  148525 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:47.658431  148525 kubeadm.go:310] [api-check] The API server is healthy after 5.003169217s
	I1010 19:28:47.676178  148525 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:47.694752  148525 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:47.720376  148525 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:47.720645  148525 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-361847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:47.736489  148525 kubeadm.go:310] [bootstrap-token] Using token: cprf0t.lm4xp75yi0cdu4sy
	I1010 19:28:46.228217  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:48.726740  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:47.737958  148525 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:47.738089  148525 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:47.750073  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:47.758010  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:47.761649  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:47.768953  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:47.774428  148525 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:48.065988  148525 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:48.502538  148525 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:49.066479  148525 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:49.069842  148525 kubeadm.go:310] 
	I1010 19:28:49.069937  148525 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:49.069947  148525 kubeadm.go:310] 
	I1010 19:28:49.070046  148525 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:49.070058  148525 kubeadm.go:310] 
	I1010 19:28:49.070089  148525 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:49.070166  148525 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:49.070254  148525 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:49.070265  148525 kubeadm.go:310] 
	I1010 19:28:49.070342  148525 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:49.070353  148525 kubeadm.go:310] 
	I1010 19:28:49.070446  148525 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:49.070478  148525 kubeadm.go:310] 
	I1010 19:28:49.070544  148525 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:49.070640  148525 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:49.070750  148525 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:49.070773  148525 kubeadm.go:310] 
	I1010 19:28:49.070880  148525 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:49.070990  148525 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:49.071001  148525 kubeadm.go:310] 
	I1010 19:28:49.071153  148525 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.071299  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:49.071330  148525 kubeadm.go:310] 	--control-plane 
	I1010 19:28:49.071349  148525 kubeadm.go:310] 
	I1010 19:28:49.071468  148525 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:49.071497  148525 kubeadm.go:310] 
	I1010 19:28:49.072228  148525 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.072354  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:49.074595  148525 kubeadm.go:310] W1010 19:28:40.525557    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.074944  148525 kubeadm.go:310] W1010 19:28:40.526329    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.075102  148525 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:49.075143  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:28:49.075166  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:49.077190  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:49.078665  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:49.091792  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
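The bridge CNI step above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. A representative example of the shape such a bridge conflist takes, written from a small Go program; the exact contents minikube generates may differ, and the subnet and plugin options here are assumptions:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Representative bridge CNI configuration: a bridge plugin with host-local
	// IPAM plus a portmap plugin for hostPort support.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}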
	I1010 19:28:49.113801  148525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:49.113920  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-361847 minikube.k8s.io/updated_at=2024_10_10T19_28_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=default-k8s-diff-port-361847 minikube.k8s.io/primary=true
	I1010 19:28:49.114074  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.154398  148525 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:49.351271  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.852049  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.351441  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.852022  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.351391  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.851329  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.351840  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.852392  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.351397  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.443325  148525 kubeadm.go:1113] duration metric: took 4.329288133s to wait for elevateKubeSystemPrivileges
	I1010 19:28:53.443363  148525 kubeadm.go:394] duration metric: took 4m54.439732071s to StartCluster
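The repeated "kubectl get sa default" runs above form a retry loop: the command is reissued roughly every 500ms until the default service account is visible, which is what the elevateKubeSystemPrivileges wait measures. A small sketch of that pattern; the timeout and helper name are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries "kubectl get sa default" until it succeeds or the
// deadline passes, mirroring the ~500ms retry loop in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		err := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default").Run()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account did not appear within %s: %v", timeout, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}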
	I1010 19:28:53.443386  148525 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.443481  148525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:53.445465  148525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.445747  148525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:53.445842  148525 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:53.445957  148525 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.445980  148525 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.445992  148525 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:53.446004  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:53.446026  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446065  148525 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446100  148525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-361847"
	I1010 19:28:53.446085  148525 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446137  148525 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.446151  148525 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:53.446242  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446515  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.446562  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447089  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447135  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447315  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447360  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.450779  148525 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:53.452838  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:53.465502  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I1010 19:28:53.466020  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.466572  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.466594  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.466772  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1010 19:28:53.467034  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.467209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.467310  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.467828  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.467857  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.467899  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1010 19:28:53.468270  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.468451  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.468866  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.468891  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.469102  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.469150  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.469484  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.470068  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.470114  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.471192  148525 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.471213  148525 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:53.471261  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.471618  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.471664  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.486550  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1010 19:28:53.487068  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.487608  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.487626  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.488015  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.488329  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.490200  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I1010 19:28:53.490240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.490790  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.491318  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.491341  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.491682  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.491957  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1010 19:28:53.492100  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.492423  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.492731  148525 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:53.492811  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.492831  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.493240  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.493885  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.493979  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.494031  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.494359  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:53.494381  148525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:53.494397  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.495771  148525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:51.226596  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227299  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227335  147213 pod_ready.go:82] duration metric: took 4m0.007224391s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:53.227346  147213 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1010 19:28:53.227355  147213 pod_ready.go:39] duration metric: took 4m5.554224355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.227375  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:53.227419  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:53.227484  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:53.288713  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.288749  147213 cri.go:89] found id: ""
	I1010 19:28:53.288759  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:53.288823  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.294819  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:53.294904  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:53.340169  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:53.340197  147213 cri.go:89] found id: ""
	I1010 19:28:53.340207  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:53.340271  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.345214  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:53.345292  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:53.392808  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.392838  147213 cri.go:89] found id: ""
	I1010 19:28:53.392859  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:53.392921  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.398275  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:53.398361  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:53.439567  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.439594  147213 cri.go:89] found id: ""
	I1010 19:28:53.439604  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:53.439665  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.444366  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:53.444436  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:53.522580  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:53.522597  147213 cri.go:89] found id: ""
	I1010 19:28:53.522605  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:53.522654  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.528890  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:53.528974  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:53.575933  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:53.575963  147213 cri.go:89] found id: ""
	I1010 19:28:53.575975  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:53.576035  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.581693  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:53.581763  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:53.619789  147213 cri.go:89] found id: ""
	I1010 19:28:53.619819  147213 logs.go:282] 0 containers: []
	W1010 19:28:53.619831  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:53.619839  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:53.619899  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:53.659715  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:53.659746  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:53.659752  147213 cri.go:89] found id: ""
	I1010 19:28:53.659762  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:53.659828  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.664377  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.668766  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:53.668796  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:53.685976  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:53.686007  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
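cri.go above enumerates containers per component with "crictl ps -a --quiet --name=<component>" and keeps the returned IDs for log gathering. A short sketch of that listing; only the crictl flags are taken from the log, the rest is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs "crictl ps -a --quiet --name=<name>" and returns the
// container IDs, one per output line, as cri.go does for each component above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}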
	I1010 19:28:53.497232  148525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:53.497251  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:53.497273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.497732  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498599  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.498627  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498971  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.499159  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.499312  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.499414  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.501044  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.501531  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501782  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.501956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.502080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.502232  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.512240  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1010 19:28:53.512809  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.513347  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.513368  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.513787  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.514001  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.515436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.515639  148525 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.515659  148525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:53.515681  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.518128  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518596  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.518628  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518909  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.519080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.519216  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.519376  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.712871  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:53.755059  148525 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766564  148525 node_ready.go:49] node "default-k8s-diff-port-361847" has status "Ready":"True"
	I1010 19:28:53.766590  148525 node_ready.go:38] duration metric: took 11.490223ms for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766603  148525 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.777458  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:53.875493  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:53.875525  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:53.911443  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.944885  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:53.944919  148525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:53.945487  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:54.011209  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.011239  148525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:54.039679  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.598172  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598226  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598584  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598608  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.598619  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598898  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:54.598931  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598939  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.643365  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.643392  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.643734  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.643760  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287018  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.341483807s)
	I1010 19:28:55.287045  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.247326452s)
	I1010 19:28:55.287089  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287094  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287112  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287440  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287479  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287506  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287524  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287570  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287589  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287598  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287607  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287818  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287831  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.287855  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287862  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287872  148525 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-361847"
	I1010 19:28:55.287880  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.289944  148525 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:53.841387  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:53.841441  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.892951  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:53.893005  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.947636  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:53.947668  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.992969  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:53.992998  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:54.520652  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:54.520703  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:28:54.588366  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:54.588418  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:54.651179  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:54.651227  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:54.712881  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:54.712925  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:54.779030  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:54.779094  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:54.821961  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:54.822002  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:54.871409  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:54.871446  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:57.425310  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:57.442308  147213 api_server.go:72] duration metric: took 4m17.02881034s to wait for apiserver process to appear ...
	I1010 19:28:57.442343  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:57.442383  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:57.442444  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:57.481392  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.481420  147213 cri.go:89] found id: ""
	I1010 19:28:57.481430  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:57.481503  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.486191  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:57.486269  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:57.532238  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.532271  147213 cri.go:89] found id: ""
	I1010 19:28:57.532284  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:57.532357  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.538105  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:57.538188  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:57.579729  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:57.579757  147213 cri.go:89] found id: ""
	I1010 19:28:57.579767  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:57.579833  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.584494  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:57.584568  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:57.623920  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:57.623949  147213 cri.go:89] found id: ""
	I1010 19:28:57.623960  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:57.624028  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.628927  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:57.629018  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:57.669669  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.669698  147213 cri.go:89] found id: ""
	I1010 19:28:57.669707  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:57.669771  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.674449  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:57.674526  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:57.721856  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:57.721881  147213 cri.go:89] found id: ""
	I1010 19:28:57.721891  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:57.721955  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.726422  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:57.726497  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:57.764464  147213 cri.go:89] found id: ""
	I1010 19:28:57.764499  147213 logs.go:282] 0 containers: []
	W1010 19:28:57.764512  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:57.764521  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:57.764595  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:57.809758  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:57.809784  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:57.809788  147213 cri.go:89] found id: ""
	I1010 19:28:57.809797  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:57.809854  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.815576  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.820152  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:57.820181  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.869339  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:57.869383  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.918698  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:57.918739  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.960939  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:57.960985  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:58.013572  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:58.013612  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:58.053247  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:58.053277  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:58.507428  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:58.507473  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:58.552704  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:58.552742  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:58.672077  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:58.672127  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:58.690997  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:58.691049  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:58.735251  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:58.735287  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:55.291700  148525 addons.go:510] duration metric: took 1.845864985s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:55.785186  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:57.789567  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:00.284444  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:01.297627  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.297660  148525 pod_ready.go:82] duration metric: took 7.520173084s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.297676  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804654  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.804676  148525 pod_ready.go:82] duration metric: took 506.992872ms for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804690  148525 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809788  148525 pod_ready.go:93] pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.809814  148525 pod_ready.go:82] duration metric: took 5.116023ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809825  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814460  148525 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.814486  148525 pod_ready.go:82] duration metric: took 4.652085ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814501  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819719  148525 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.819741  148525 pod_ready.go:82] duration metric: took 5.231258ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819753  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082285  148525 pod_ready.go:93] pod "kube-proxy-jlvn6" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.082325  148525 pod_ready.go:82] duration metric: took 262.562954ms for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082342  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481705  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.481730  148525 pod_ready.go:82] duration metric: took 399.378957ms for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481742  148525 pod_ready.go:39] duration metric: took 8.715126416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:29:02.481779  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:29:02.481832  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:29:02.498706  148525 api_server.go:72] duration metric: took 9.052891898s to wait for apiserver process to appear ...
	I1010 19:29:02.498760  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:29:02.498795  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:29:02.503501  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:29:02.504594  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:02.504620  148525 api_server.go:131] duration metric: took 5.850548ms to wait for apiserver health ...
	I1010 19:29:02.504629  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:02.685579  148525 system_pods.go:59] 9 kube-system pods found
	I1010 19:29:02.685611  148525 system_pods.go:61] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:02.685618  148525 system_pods.go:61] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:02.685624  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:02.685630  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:02.685635  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:02.685639  148525 system_pods.go:61] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:02.685644  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:02.685653  148525 system_pods.go:61] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:02.685658  148525 system_pods.go:61] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:02.685669  148525 system_pods.go:74] duration metric: took 181.032548ms to wait for pod list to return data ...
	I1010 19:29:02.685683  148525 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:02.883256  148525 default_sa.go:45] found service account: "default"
	I1010 19:29:02.883288  148525 default_sa.go:55] duration metric: took 197.59742ms for default service account to be created ...
	I1010 19:29:02.883298  148525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:03.084706  148525 system_pods.go:86] 9 kube-system pods found
	I1010 19:29:03.084737  148525 system_pods.go:89] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:03.084742  148525 system_pods.go:89] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:03.084746  148525 system_pods.go:89] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:03.084751  148525 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:03.084755  148525 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:03.084759  148525 system_pods.go:89] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:03.084762  148525 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:03.084768  148525 system_pods.go:89] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:03.084772  148525 system_pods.go:89] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:03.084779  148525 system_pods.go:126] duration metric: took 201.476637ms to wait for k8s-apps to be running ...
	I1010 19:29:03.084787  148525 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:03.084832  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:03.100986  148525 system_svc.go:56] duration metric: took 16.183062ms WaitForService to wait for kubelet
	I1010 19:29:03.101026  148525 kubeadm.go:582] duration metric: took 9.655245557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:03.101050  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:03.282063  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:03.282095  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:03.282106  148525 node_conditions.go:105] duration metric: took 181.049888ms to run NodePressure ...
	I1010 19:29:03.282119  148525 start.go:241] waiting for startup goroutines ...
	I1010 19:29:03.282125  148525 start.go:246] waiting for cluster config update ...
	I1010 19:29:03.282135  148525 start.go:255] writing updated cluster config ...
	I1010 19:29:03.282414  148525 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:03.331838  148525 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:03.333698  148525 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-361847" cluster and "default" namespace by default
	I1010 19:28:58.775358  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:58.775396  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:58.812210  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:58.812269  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:01.381750  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:29:01.386658  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:29:01.387793  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:01.387819  147213 api_server.go:131] duration metric: took 3.945468552s to wait for apiserver health ...
	I1010 19:29:01.387829  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:01.387861  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:29:01.387948  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:29:01.433312  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:01.433344  147213 cri.go:89] found id: ""
	I1010 19:29:01.433433  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:29:01.433521  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.437920  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:29:01.437983  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:29:01.476429  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.476458  147213 cri.go:89] found id: ""
	I1010 19:29:01.476470  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:29:01.476522  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.480912  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:29:01.480987  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:29:01.522141  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.522164  147213 cri.go:89] found id: ""
	I1010 19:29:01.522173  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:29:01.522238  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.526742  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:29:01.526803  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:29:01.572715  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:01.572747  147213 cri.go:89] found id: ""
	I1010 19:29:01.572759  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:29:01.572814  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.577754  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:29:01.577832  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:29:01.616077  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.616104  147213 cri.go:89] found id: ""
	I1010 19:29:01.616121  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:29:01.616185  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.620622  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:29:01.620702  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:29:01.662859  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:01.662889  147213 cri.go:89] found id: ""
	I1010 19:29:01.662903  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:29:01.662964  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.667491  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:29:01.667585  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:29:01.706191  147213 cri.go:89] found id: ""
	I1010 19:29:01.706217  147213 logs.go:282] 0 containers: []
	W1010 19:29:01.706228  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:29:01.706234  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:29:01.706299  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:29:01.753559  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:01.753581  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:01.753584  147213 cri.go:89] found id: ""
	I1010 19:29:01.753591  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:29:01.753645  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.758179  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.762336  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:29:01.762358  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:29:01.867667  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:29:01.867698  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.911722  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:29:01.911756  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.955152  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:29:01.955189  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.995010  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:29:01.995041  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:02.047505  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:29:02.047546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:02.085080  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:29:02.085110  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:02.128482  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:29:02.128527  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:02.194867  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:29:02.194904  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:29:02.211881  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:29:02.211911  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:02.262969  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:29:02.263013  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:02.302921  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:29:02.302956  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:29:02.671102  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:29:02.671169  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:29:05.241477  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:29:05.241508  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.241513  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.241517  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.241521  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.241525  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.241528  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.241534  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.241540  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.241549  147213 system_pods.go:74] duration metric: took 3.853712488s to wait for pod list to return data ...
	I1010 19:29:05.241556  147213 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:05.244686  147213 default_sa.go:45] found service account: "default"
	I1010 19:29:05.244721  147213 default_sa.go:55] duration metric: took 3.158069ms for default service account to be created ...
	I1010 19:29:05.244733  147213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:05.249372  147213 system_pods.go:86] 8 kube-system pods found
	I1010 19:29:05.249398  147213 system_pods.go:89] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.249404  147213 system_pods.go:89] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.249408  147213 system_pods.go:89] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.249413  147213 system_pods.go:89] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.249418  147213 system_pods.go:89] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.249425  147213 system_pods.go:89] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.249433  147213 system_pods.go:89] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.249442  147213 system_pods.go:89] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.249455  147213 system_pods.go:126] duration metric: took 4.715381ms to wait for k8s-apps to be running ...
	I1010 19:29:05.249467  147213 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:05.249519  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:05.265180  147213 system_svc.go:56] duration metric: took 15.703413ms WaitForService to wait for kubelet
	I1010 19:29:05.265216  147213 kubeadm.go:582] duration metric: took 4m24.851723603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:05.265237  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:05.268775  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:05.268807  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:05.268821  147213 node_conditions.go:105] duration metric: took 3.575195ms to run NodePressure ...
	I1010 19:29:05.268834  147213 start.go:241] waiting for startup goroutines ...
	I1010 19:29:05.268840  147213 start.go:246] waiting for cluster config update ...
	I1010 19:29:05.268869  147213 start.go:255] writing updated cluster config ...
	I1010 19:29:05.269148  147213 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:05.319999  147213 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:05.322189  147213 out.go:177] * Done! kubectl is now configured to use "no-preload-320324" cluster and "default" namespace by default
	I1010 19:29:41.275119  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:29:41.275272  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:29:41.276822  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:29:41.276919  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:29:41.277017  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:29:41.277142  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:29:41.277254  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:29:41.277357  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:29:41.279069  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:29:41.279160  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:29:41.279217  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:29:41.279306  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:29:41.279381  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:29:41.279484  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:29:41.279576  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:29:41.279674  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:29:41.279779  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:29:41.279906  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:29:41.279971  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:29:41.280005  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:29:41.280052  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:29:41.280095  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:29:41.280144  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:29:41.280219  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:29:41.280317  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:29:41.280474  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:29:41.280583  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:29:41.280648  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:29:41.280736  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:29:41.282180  148123 out.go:235]   - Booting up control plane ...
	I1010 19:29:41.282266  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:29:41.282336  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:29:41.282414  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:29:41.282538  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:29:41.282748  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:29:41.282822  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:29:41.282896  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283061  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283123  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283287  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283348  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283519  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283623  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283809  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283900  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.284115  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.284135  148123 kubeadm.go:310] 
	I1010 19:29:41.284201  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:29:41.284243  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:29:41.284257  148123 kubeadm.go:310] 
	I1010 19:29:41.284307  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:29:41.284346  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:29:41.284481  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:29:41.284505  148123 kubeadm.go:310] 
	I1010 19:29:41.284655  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:29:41.284714  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:29:41.284752  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:29:41.284758  148123 kubeadm.go:310] 
	I1010 19:29:41.284913  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:29:41.285038  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:29:41.285055  148123 kubeadm.go:310] 
	I1010 19:29:41.285169  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:29:41.285307  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:29:41.285475  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:29:41.285587  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:29:41.285644  148123 kubeadm.go:310] 
	W1010 19:29:41.285747  148123 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
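The kubeadm failure above ends with a manual triage recipe (kubelet health endpoint, systemd status, kubelet journal, then the CRI-O container list). A minimal bash sketch that strings those same checks together, assuming it runs on the node itself (for example via `minikube ssh`) and that the CRI-O socket path matches the one in the log:

#!/usr/bin/env bash
# Hypothetical triage helper following the suggestions printed by kubeadm above.
# Assumes it runs on the affected node (e.g. inside `minikube ssh`) with sudo rights
# and that CRI-O listens on /var/run/crio/crio.sock, as shown in the log.
set -u
SOCK=/var/run/crio/crio.sock

echo "== kubelet healthz =="
curl -sSL http://localhost:10248/healthz || echo "(healthz unreachable)"

echo "== kubelet service status =="
sudo systemctl status kubelet --no-pager || true

echo "== recent kubelet journal =="
sudo journalctl -xeu kubelet --no-pager | tail -n 100

echo "== kube-* containers known to CRI-O =="
sudo crictl --runtime-endpoint "$SOCK" ps -a | grep kube | grep -v pause || true

# Dump the tail of every container's log so a crashed control-plane
# component can be spotted without picking ids by hand.
for id in $(sudo crictl --runtime-endpoint "$SOCK" ps -a --quiet); do
  echo "== logs for container $id =="
  sudo crictl --runtime-endpoint "$SOCK" logs "$id" 2>&1 | tail -n 50
done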
	
	I1010 19:29:41.285792  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:29:46.681684  148123 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.395862838s)
	I1010 19:29:46.681797  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:46.696927  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:29:46.708250  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:29:46.708280  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:29:46.708339  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:29:46.719748  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:29:46.719828  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:29:46.730892  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:29:46.742329  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:29:46.742401  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:29:46.752919  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.762860  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:29:46.762932  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.772513  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:29:46.781672  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:29:46.781746  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
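The cleanup sequence above (kubeadm.go:163) greps each kubeconfig for the expected control-plane endpoint and deletes any file that does not contain it; a missing file fails the grep and falls through to the same rm. A rough shell equivalent, with the endpoint and file names taken from the log:

# Rough equivalent of minikube's stale-config cleanup shown above (kubeadm.go:163).
# Endpoint and file list copied from the log; run on the node with sudo.
ENDPOINT="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  # Keep the file only if it already points at the expected endpoint;
  # an absent file or a different server also triggers removal, as in the log.
  if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null; then
    sudo rm -f "/etc/kubernetes/$f"
  fi
done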
	I1010 19:29:46.791250  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:29:47.018706  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:31:43.351851  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:31:43.351992  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:31:43.353664  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:31:43.353752  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:31:43.353861  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:31:43.353998  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:31:43.354130  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:31:43.354235  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:31:43.356450  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:31:43.356557  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:31:43.356652  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:31:43.356783  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:31:43.356902  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:31:43.357002  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:31:43.357074  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:31:43.357181  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:31:43.357247  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:31:43.357325  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:31:43.357408  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:31:43.357459  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:31:43.357529  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:31:43.357604  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:31:43.357676  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:31:43.357751  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:31:43.357830  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:31:43.357979  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:31:43.358092  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:31:43.358158  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:31:43.358263  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:31:43.360216  148123 out.go:235]   - Booting up control plane ...
	I1010 19:31:43.360332  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:31:43.360463  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:31:43.360551  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:31:43.360673  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:31:43.360907  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:31:43.360976  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:31:43.361058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361309  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361410  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361648  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361742  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361960  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362308  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362399  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362640  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362652  148123 kubeadm.go:310] 
	I1010 19:31:43.362708  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:31:43.362758  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:31:43.362768  148123 kubeadm.go:310] 
	I1010 19:31:43.362809  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:31:43.362858  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:31:43.362981  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:31:43.362993  148123 kubeadm.go:310] 
	I1010 19:31:43.363119  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:31:43.363164  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:31:43.363204  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:31:43.363219  148123 kubeadm.go:310] 
	I1010 19:31:43.363344  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:31:43.363454  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:31:43.363465  148123 kubeadm.go:310] 
	I1010 19:31:43.363591  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:31:43.363708  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:31:43.363803  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:31:43.363892  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:31:43.363978  148123 kubeadm.go:394] duration metric: took 8m3.085634474s to StartCluster
	I1010 19:31:43.364052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:31:43.364159  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:31:43.364268  148123 kubeadm.go:310] 
	I1010 19:31:43.413041  148123 cri.go:89] found id: ""
	I1010 19:31:43.413075  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.413086  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:31:43.413092  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:31:43.413206  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:31:43.456454  148123 cri.go:89] found id: ""
	I1010 19:31:43.456487  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.456499  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:31:43.456507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:31:43.456567  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:31:43.498582  148123 cri.go:89] found id: ""
	I1010 19:31:43.498614  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.498624  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:31:43.498633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:31:43.498694  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:31:43.540557  148123 cri.go:89] found id: ""
	I1010 19:31:43.540589  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.540598  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:31:43.540605  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:31:43.540661  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:31:43.576743  148123 cri.go:89] found id: ""
	I1010 19:31:43.576776  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.576787  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:31:43.576796  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:31:43.576887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:31:43.614620  148123 cri.go:89] found id: ""
	I1010 19:31:43.614652  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.614662  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:31:43.614668  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:31:43.614730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:31:43.652008  148123 cri.go:89] found id: ""
	I1010 19:31:43.652061  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.652075  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:31:43.652084  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:31:43.652153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:31:43.689990  148123 cri.go:89] found id: ""
	I1010 19:31:43.690019  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.690028  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
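Once StartCluster gives up, minikube asks CRI-O for containers matching each control-plane component name in turn (cri.go:54 / logs.go:282 above) and records that none exist. A compact sketch of that scan, using the same component list and assuming crictl is available on the node:

# Sketch of the per-component container scan performed above.
# Component names are copied from the log lines.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  if [ -z "$ids" ]; then
    echo "no container found matching \"$name\""
  else
    echo "$name: $ids"
  fi
done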
	I1010 19:31:43.690043  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:31:43.690056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:31:43.745634  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:31:43.745673  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:31:43.760064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:31:43.760095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:31:43.840168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:31:43.840197  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:31:43.840214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:31:43.943265  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:31:43.943303  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
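The "Gathering logs" steps above pull the kubelet journal, filtered dmesg output, kubectl describe nodes, the CRI-O journal, and the container list. A small sketch that collects the same material into one file; the kubectl binary path and kubeconfig location are the ones printed in the log and may differ on other setups:

# Collects roughly the same material as the "Gathering logs for ..." steps above.
# Paths for kubectl and the kubeconfig are taken from the log; adjust as needed.
OUT=/tmp/minikube-diag.txt
{
  echo "### kubelet journal";    sudo journalctl -u kubelet -n 400 --no-pager
  echo "### dmesg (warnings+)";  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  echo "### describe nodes";     sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
                                     --kubeconfig=/var/lib/minikube/kubeconfig || true
  echo "### CRI-O journal";      sudo journalctl -u crio -n 400 --no-pager
  echo "### container status";   sudo crictl ps -a || sudo docker ps -a
} > "$OUT" 2>&1
echo "wrote $OUT"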
	W1010 19:31:43.987758  148123 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:31:43.987816  148123 out.go:270] * 
	W1010 19:31:43.987891  148123 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.987906  148123 out.go:270] * 
	W1010 19:31:43.988703  148123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:31:43.991928  148123 out.go:201] 
	W1010 19:31:43.993494  148123 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.993556  148123 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:31:43.993585  148123 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1010 19:31:43.995273  148123 out.go:201] 
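The closing suggestion is to inspect `journalctl -xeu kubelet` and retry with the kubelet cgroup driver forced to systemd. A hedged example of such a retry; only the --extra-config flag comes from the log, while the profile name, driver, and runtime values below are illustrative placeholders, not taken from this run:

# Hypothetical retry following the suggestion above. The profile name,
# driver, and container runtime are placeholders; only --extra-config
# mirrors the advice printed in the log.
minikube start -p old-k8s-version-demo \
  --driver=kvm2 \
  --container-runtime=crio \
  --kubernetes-version=v1.20.0 \
  --extra-config=kubelet.cgroup-driver=systemd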
	
	
	==> CRI-O <==
	Oct 10 19:37:32 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.913409650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589052913387790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8ef19af-be4b-47cf-aa08-cbb6fdbfcfdd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:37:32 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.914099724Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08088ccc-206c-4bb0-b9d2-907fd8a6a188 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:37:32 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.914171512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08088ccc-206c-4bb0-b9d2-907fd8a6a188 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:37:32 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.914355688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0def20106145a96ba01f0e844ac07a0c46313190a6835ec2ad8167db5f072e2f,PodSandboxId:3dcb6748315187166471c06f6429a87db9b7528ec88e1aa713728283129efd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588501941363227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb28184-daef-40be-9170-b42058727418,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9196310387b44e529ddad250a9c2322eb9daccc42f72fb7c3e06b51b182d24,PodSandboxId:e2fb0a2cbe21d7ebcc79a87f810888033c4d35c79e94b2ec7c7c591a618cc989,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501347414391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n7wxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb76f8e0d16bf490c5b0a16c6b7b62555965c8d0c861c7b8b68c27da4dcd936,PodSandboxId:fc83c022f5ff686a625fce3b0f99700157dbe5e393ebb8fc3f1a4b89554ec274,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501282332611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-59752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7980c69-dd8e-42e0-a0ab-1dedf2203367,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff67e9a4d0b9d09d6489b72b20e6c96dde13d1c32f246a901abe1da570d4218b,PodSandboxId:c7a0c173a7780258fdd91a72fa84622c49d87586253ab7b313e5ceb98582b031,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728588500229316816,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hdds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7cbbf4-12be-469d-b176-37c4daccab96,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6373a0366f8bd51fdb46f462b2b7d6b68e53f3726cfce9da4d1272094b33cfb,PodSandboxId:8412c7aa1ef71a8d51a14ef57fa83858096e2f5c20c25b0e8e898baf828bd79d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588489225563567,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c9d5c03b79507aec9719e9caf167d73e80671922639bec8a689fd1a4190ad0,PodSandboxId:ef03a0bfea565b3bd4550a95e429ed9bc9d7337edcbd8fe110914f581dcbc973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588489215686149,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5c1e154d45345ed7b2f0bd497cc877,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408a273bb46695bd06d19af956d4f1088d12f627e4715751845732fcb2e6e5b9,PodSandboxId:037867becb2b264537a1035b4af6b727abd224bd3ced4139e4e22bed0de67403,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588489247220640,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff06946beb7a9457525c6f483ee8641,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d921296cb4c649b76fbfd34f3670185a813610d065cfb2722c8b53977ab43,PodSandboxId:09baaf777b3b1146a8da3c69966ab512e19611eb713d50867a5a79ee4264490d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588489124804774,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a510bdbe8cdaaff1bec4f8a189faa6ac,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ee864d6e25c939a6b71b6b5507fed3cd1d7211fc7c2ff4963d9df8733e685,PodSandboxId:0ab0c873c1e1c3c2b5cb5c6fb3ef49585a8419193abfb7c4aa167da814115830,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588200154518322,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08088ccc-206c-4bb0-b9d2-907fd8a6a188 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:37:32 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.954049377Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=99efbe89-8a81-4009-a3c1-b12c919b1b7b name=/runtime.v1.RuntimeService/Version
	Oct 10 19:37:32 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.954141386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=99efbe89-8a81-4009-a3c1-b12c919b1b7b name=/runtime.v1.RuntimeService/Version
	Oct 10 19:37:32 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.955478477Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6aa86cf5-f78a-4b61-af9a-ae459e39d7e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:37:32 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.956055843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589052956032366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6aa86cf5-f78a-4b61-af9a-ae459e39d7e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:37:32 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.956632656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8defa585-fda3-4f3f-91b7-a4512f28802c name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:37:32 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.956683248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8defa585-fda3-4f3f-91b7-a4512f28802c name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:37:32 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.956944682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0def20106145a96ba01f0e844ac07a0c46313190a6835ec2ad8167db5f072e2f,PodSandboxId:3dcb6748315187166471c06f6429a87db9b7528ec88e1aa713728283129efd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588501941363227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb28184-daef-40be-9170-b42058727418,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9196310387b44e529ddad250a9c2322eb9daccc42f72fb7c3e06b51b182d24,PodSandboxId:e2fb0a2cbe21d7ebcc79a87f810888033c4d35c79e94b2ec7c7c591a618cc989,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501347414391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n7wxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb76f8e0d16bf490c5b0a16c6b7b62555965c8d0c861c7b8b68c27da4dcd936,PodSandboxId:fc83c022f5ff686a625fce3b0f99700157dbe5e393ebb8fc3f1a4b89554ec274,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501282332611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-59752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7980c69-dd8e-42e0-a0ab-1dedf2203367,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff67e9a4d0b9d09d6489b72b20e6c96dde13d1c32f246a901abe1da570d4218b,PodSandboxId:c7a0c173a7780258fdd91a72fa84622c49d87586253ab7b313e5ceb98582b031,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728588500229316816,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hdds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7cbbf4-12be-469d-b176-37c4daccab96,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6373a0366f8bd51fdb46f462b2b7d6b68e53f3726cfce9da4d1272094b33cfb,PodSandboxId:8412c7aa1ef71a8d51a14ef57fa83858096e2f5c20c25b0e8e898baf828bd79d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588489225563567,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c9d5c03b79507aec9719e9caf167d73e80671922639bec8a689fd1a4190ad0,PodSandboxId:ef03a0bfea565b3bd4550a95e429ed9bc9d7337edcbd8fe110914f581dcbc973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588489215686149,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5c1e154d45345ed7b2f0bd497cc877,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408a273bb46695bd06d19af956d4f1088d12f627e4715751845732fcb2e6e5b9,PodSandboxId:037867becb2b264537a1035b4af6b727abd224bd3ced4139e4e22bed0de67403,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588489247220640,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff06946beb7a9457525c6f483ee8641,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d921296cb4c649b76fbfd34f3670185a813610d065cfb2722c8b53977ab43,PodSandboxId:09baaf777b3b1146a8da3c69966ab512e19611eb713d50867a5a79ee4264490d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588489124804774,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a510bdbe8cdaaff1bec4f8a189faa6ac,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ee864d6e25c939a6b71b6b5507fed3cd1d7211fc7c2ff4963d9df8733e685,PodSandboxId:0ab0c873c1e1c3c2b5cb5c6fb3ef49585a8419193abfb7c4aa167da814115830,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588200154518322,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8defa585-fda3-4f3f-91b7-a4512f28802c name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.998561194Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0c99f19-de89-411d-ab0b-e5dfab4076c2 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:32.998656227Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0c99f19-de89-411d-ab0b-e5dfab4076c2 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.000347438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c50e94a5-db5d-4d5d-94ce-24a2ffc5232a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.000754231Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589053000730762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c50e94a5-db5d-4d5d-94ce-24a2ffc5232a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.004453888Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa7c99b5-52a1-4746-8565-21a2b0d2b7fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.004549836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa7c99b5-52a1-4746-8565-21a2b0d2b7fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.004735930Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0def20106145a96ba01f0e844ac07a0c46313190a6835ec2ad8167db5f072e2f,PodSandboxId:3dcb6748315187166471c06f6429a87db9b7528ec88e1aa713728283129efd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588501941363227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb28184-daef-40be-9170-b42058727418,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9196310387b44e529ddad250a9c2322eb9daccc42f72fb7c3e06b51b182d24,PodSandboxId:e2fb0a2cbe21d7ebcc79a87f810888033c4d35c79e94b2ec7c7c591a618cc989,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501347414391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n7wxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb76f8e0d16bf490c5b0a16c6b7b62555965c8d0c861c7b8b68c27da4dcd936,PodSandboxId:fc83c022f5ff686a625fce3b0f99700157dbe5e393ebb8fc3f1a4b89554ec274,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501282332611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-59752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7980c69-dd8e-42e0-a0ab-1dedf2203367,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff67e9a4d0b9d09d6489b72b20e6c96dde13d1c32f246a901abe1da570d4218b,PodSandboxId:c7a0c173a7780258fdd91a72fa84622c49d87586253ab7b313e5ceb98582b031,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728588500229316816,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hdds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7cbbf4-12be-469d-b176-37c4daccab96,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6373a0366f8bd51fdb46f462b2b7d6b68e53f3726cfce9da4d1272094b33cfb,PodSandboxId:8412c7aa1ef71a8d51a14ef57fa83858096e2f5c20c25b0e8e898baf828bd79d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588489225563567,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c9d5c03b79507aec9719e9caf167d73e80671922639bec8a689fd1a4190ad0,PodSandboxId:ef03a0bfea565b3bd4550a95e429ed9bc9d7337edcbd8fe110914f581dcbc973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588489215686149,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5c1e154d45345ed7b2f0bd497cc877,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408a273bb46695bd06d19af956d4f1088d12f627e4715751845732fcb2e6e5b9,PodSandboxId:037867becb2b264537a1035b4af6b727abd224bd3ced4139e4e22bed0de67403,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588489247220640,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff06946beb7a9457525c6f483ee8641,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d921296cb4c649b76fbfd34f3670185a813610d065cfb2722c8b53977ab43,PodSandboxId:09baaf777b3b1146a8da3c69966ab512e19611eb713d50867a5a79ee4264490d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588489124804774,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a510bdbe8cdaaff1bec4f8a189faa6ac,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ee864d6e25c939a6b71b6b5507fed3cd1d7211fc7c2ff4963d9df8733e685,PodSandboxId:0ab0c873c1e1c3c2b5cb5c6fb3ef49585a8419193abfb7c4aa167da814115830,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588200154518322,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa7c99b5-52a1-4746-8565-21a2b0d2b7fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.045688933Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c1295ab2-55cd-41c0-bd35-820761b9be28 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.045783523Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c1295ab2-55cd-41c0-bd35-820761b9be28 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.047361401Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80d0ca24-8ca8-4a8a-8932-a93c4a28669d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.049165411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589053049139148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80d0ca24-8ca8-4a8a-8932-a93c4a28669d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.049949195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2ad8af7-26b0-4e05-8720-201470ccc571 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.050030715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2ad8af7-26b0-4e05-8720-201470ccc571 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:37:33 embed-certs-541370 crio[703]: time="2024-10-10 19:37:33.050237108Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0def20106145a96ba01f0e844ac07a0c46313190a6835ec2ad8167db5f072e2f,PodSandboxId:3dcb6748315187166471c06f6429a87db9b7528ec88e1aa713728283129efd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588501941363227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb28184-daef-40be-9170-b42058727418,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9196310387b44e529ddad250a9c2322eb9daccc42f72fb7c3e06b51b182d24,PodSandboxId:e2fb0a2cbe21d7ebcc79a87f810888033c4d35c79e94b2ec7c7c591a618cc989,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501347414391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n7wxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb76f8e0d16bf490c5b0a16c6b7b62555965c8d0c861c7b8b68c27da4dcd936,PodSandboxId:fc83c022f5ff686a625fce3b0f99700157dbe5e393ebb8fc3f1a4b89554ec274,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501282332611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-59752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7980c69-dd8e-42e0-a0ab-1dedf2203367,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff67e9a4d0b9d09d6489b72b20e6c96dde13d1c32f246a901abe1da570d4218b,PodSandboxId:c7a0c173a7780258fdd91a72fa84622c49d87586253ab7b313e5ceb98582b031,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728588500229316816,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hdds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7cbbf4-12be-469d-b176-37c4daccab96,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6373a0366f8bd51fdb46f462b2b7d6b68e53f3726cfce9da4d1272094b33cfb,PodSandboxId:8412c7aa1ef71a8d51a14ef57fa83858096e2f5c20c25b0e8e898baf828bd79d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588489225563567,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c9d5c03b79507aec9719e9caf167d73e80671922639bec8a689fd1a4190ad0,PodSandboxId:ef03a0bfea565b3bd4550a95e429ed9bc9d7337edcbd8fe110914f581dcbc973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588489215686149,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5c1e154d45345ed7b2f0bd497cc877,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408a273bb46695bd06d19af956d4f1088d12f627e4715751845732fcb2e6e5b9,PodSandboxId:037867becb2b264537a1035b4af6b727abd224bd3ced4139e4e22bed0de67403,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588489247220640,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff06946beb7a9457525c6f483ee8641,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d921296cb4c649b76fbfd34f3670185a813610d065cfb2722c8b53977ab43,PodSandboxId:09baaf777b3b1146a8da3c69966ab512e19611eb713d50867a5a79ee4264490d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588489124804774,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a510bdbe8cdaaff1bec4f8a189faa6ac,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ee864d6e25c939a6b71b6b5507fed3cd1d7211fc7c2ff4963d9df8733e685,PodSandboxId:0ab0c873c1e1c3c2b5cb5c6fb3ef49585a8419193abfb7c4aa167da814115830,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588200154518322,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2ad8af7-26b0-4e05-8720-201470ccc571 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0def20106145a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   3dcb674831518       storage-provisioner
	df9196310387b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   e2fb0a2cbe21d       coredns-7c65d6cfc9-n7wxs
	6eb76f8e0d16b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   fc83c022f5ff6       coredns-7c65d6cfc9-59752
	ff67e9a4d0b9d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   c7a0c173a7780       kube-proxy-6hdds
	408a273bb4669       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   037867becb2b2       etcd-embed-certs-541370
	c6373a0366f8b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   8412c7aa1ef71       kube-apiserver-embed-certs-541370
	73c9d5c03b795       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   ef03a0bfea565       kube-controller-manager-embed-certs-541370
	2f7d921296cb4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   09baaf777b3b1       kube-scheduler-embed-certs-541370
	f81ee864d6e25       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   0ab0c873c1e1c       kube-apiserver-embed-certs-541370
	
	
	==> coredns [6eb76f8e0d16bf490c5b0a16c6b7b62555965c8d0c861c7b8b68c27da4dcd936] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [df9196310387b44e529ddad250a9c2322eb9daccc42f72fb7c3e06b51b182d24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-541370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-541370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=embed-certs-541370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T19_28_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 19:28:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-541370
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 19:37:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 19:33:30 +0000   Thu, 10 Oct 2024 19:28:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 19:33:30 +0000   Thu, 10 Oct 2024 19:28:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 19:33:30 +0000   Thu, 10 Oct 2024 19:28:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 19:33:30 +0000   Thu, 10 Oct 2024 19:28:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    embed-certs-541370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e13a128c52ea4352aceae98f7e8f44c9
	  System UUID:                e13a128c-52ea-4352-acea-e98f7e8f44c9
	  Boot ID:                    8c4a8121-3b24-41fb-98a7-05e8fae9b2c6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-59752                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-7c65d6cfc9-n7wxs                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-embed-certs-541370                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-541370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-541370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-6hdds                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-embed-certs-541370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-znhn4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  Starting                 9m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node embed-certs-541370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node embed-certs-541370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node embed-certs-541370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node embed-certs-541370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node embed-certs-541370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node embed-certs-541370 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m15s                  node-controller  Node embed-certs-541370 event: Registered Node embed-certs-541370 in Controller
	
	
	==> dmesg <==
	[  +0.050711] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040360] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.897453] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Oct10 19:23] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.469146] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.852281] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.057338] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071939] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.220739] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.149348] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.336470] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[  +4.390101] systemd-fstab-generator[781]: Ignoring "noauto" option for root device
	[  +0.062272] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.231427] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +4.581702] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.018338] kauditd_printk_skb: 85 callbacks suppressed
	[Oct10 19:28] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.285654] systemd-fstab-generator[2565]: Ignoring "noauto" option for root device
	[  +4.604140] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.959778] systemd-fstab-generator[2884]: Ignoring "noauto" option for root device
	[  +5.398379] systemd-fstab-generator[3000]: Ignoring "noauto" option for root device
	[  +0.130563] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.016002] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [408a273bb46695bd06d19af956d4f1088d12f627e4715751845732fcb2e6e5b9] <==
	{"level":"info","ts":"2024-10-10T19:28:09.627549Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-10T19:28:09.627795Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.120:2380"}
	{"level":"info","ts":"2024-10-10T19:28:09.628003Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.120:2380"}
	{"level":"info","ts":"2024-10-10T19:28:09.627932Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"af2c917f7a70ddd0","initial-advertise-peer-urls":["https://192.168.39.120:2380"],"listen-peer-urls":["https://192.168.39.120:2380"],"advertise-client-urls":["https://192.168.39.120:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.120:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-10T19:28:09.627954Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-10T19:28:10.033897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-10T19:28:10.034015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-10T19:28:10.034066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 received MsgPreVoteResp from af2c917f7a70ddd0 at term 1"}
	{"level":"info","ts":"2024-10-10T19:28:10.034101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became candidate at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:10.034133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 received MsgVoteResp from af2c917f7a70ddd0 at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:10.034171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became leader at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:10.034196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: af2c917f7a70ddd0 elected leader af2c917f7a70ddd0 at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:10.039249Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:10.042148Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"af2c917f7a70ddd0","local-member-attributes":"{Name:embed-certs-541370 ClientURLs:[https://192.168.39.120:2379]}","request-path":"/0/members/af2c917f7a70ddd0/attributes","cluster-id":"f3de5e1602edc73b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-10T19:28:10.042395Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:28:10.042792Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:28:10.043643Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:28:10.050386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.120:2379"}
	{"level":"info","ts":"2024-10-10T19:28:10.053188Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f3de5e1602edc73b","local-member-id":"af2c917f7a70ddd0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:10.057992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:10.059884Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:10.053670Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:28:10.060631Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-10T19:28:10.063944Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-10T19:28:10.064316Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:37:33 up 14 min,  0 users,  load average: 0.04, 0.14, 0.13
	Linux embed-certs-541370 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c6373a0366f8bd51fdb46f462b2b7d6b68e53f3726cfce9da4d1272094b33cfb] <==
	E1010 19:33:13.065197       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1010 19:33:13.065127       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:33:13.066414       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:33:13.066447       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:34:13.067195       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:34:13.067283       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1010 19:34:13.067323       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:34:13.067419       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:34:13.069882       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:34:13.069877       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:36:13.070443       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:36:13.070658       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1010 19:36:13.070993       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:36:13.071033       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1010 19:36:13.073485       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:36:13.073548       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [f81ee864d6e25c939a6b71b6b5507fed3cd1d7211fc7c2ff4963d9df8733e685] <==
	W1010 19:28:00.986345       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:04.491191       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:04.678344       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:04.688806       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:04.839451       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.203085       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.310810       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.315530       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.554372       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.607042       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.627185       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.692668       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.989573       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.017705       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.029339       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.109177       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.121763       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.226352       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.230120       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.249101       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.284490       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.375669       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.384344       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.451123       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.451498       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [73c9d5c03b79507aec9719e9caf167d73e80671922639bec8a689fd1a4190ad0] <==
	E1010 19:32:19.085482       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:32:19.561970       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:32:49.093709       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:32:49.569997       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:33:19.100140       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:33:19.578965       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:33:30.777528       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-541370"
	E1010 19:33:49.107143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:33:49.588008       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:34:05.890286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="243.84µs"
	I1010 19:34:16.891056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="87.968µs"
	E1010 19:34:19.114017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:34:19.595588       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:34:49.121243       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:34:49.604276       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:35:19.127053       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:35:19.616182       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:35:49.133752       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:35:49.624312       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:36:19.140412       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:36:19.634233       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:36:49.146771       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:36:49.642083       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:37:19.155279       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:37:19.649626       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ff67e9a4d0b9d09d6489b72b20e6c96dde13d1c32f246a901abe1da570d4218b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 19:28:20.736976       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 19:28:20.752734       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.120"]
	E1010 19:28:20.752803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 19:28:20.840970       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 19:28:20.841022       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 19:28:20.841044       1 server_linux.go:169] "Using iptables Proxier"
	I1010 19:28:20.846444       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 19:28:20.846742       1 server.go:483] "Version info" version="v1.31.1"
	I1010 19:28:20.846755       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 19:28:20.848975       1 config.go:199] "Starting service config controller"
	I1010 19:28:20.849000       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 19:28:20.849063       1 config.go:105] "Starting endpoint slice config controller"
	I1010 19:28:20.849067       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 19:28:20.849452       1 config.go:328] "Starting node config controller"
	I1010 19:28:20.849458       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 19:28:20.950957       1 shared_informer.go:320] Caches are synced for node config
	I1010 19:28:20.951021       1 shared_informer.go:320] Caches are synced for service config
	I1010 19:28:20.951040       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2f7d921296cb4c649b76fbfd34f3670185a813610d065cfb2722c8b53977ab43] <==
	W1010 19:28:12.939259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1010 19:28:12.939372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.007748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1010 19:28:13.007919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.027416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1010 19:28:13.027506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.060343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1010 19:28:13.060471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.079037       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1010 19:28:13.079130       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1010 19:28:13.134236       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1010 19:28:13.134390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.150917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1010 19:28:13.151047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.377384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1010 19:28:13.377434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.446226       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1010 19:28:13.446277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.450042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1010 19:28:13.450090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.464009       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 19:28:13.464058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.472984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1010 19:28:13.473147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1010 19:28:15.983437       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 10 19:36:23 embed-certs-541370 kubelet[2891]: E1010 19:36:23.871892    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-znhn4" podUID="5dc1f764-c7c7-480e-b787-5f5cf6c14a84"
	Oct 10 19:36:25 embed-certs-541370 kubelet[2891]: E1010 19:36:25.062875    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728588985062533228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:36:25 embed-certs-541370 kubelet[2891]: E1010 19:36:25.062916    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728588985062533228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:36:35 embed-certs-541370 kubelet[2891]: E1010 19:36:35.064798    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728588995064243966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:36:35 embed-certs-541370 kubelet[2891]: E1010 19:36:35.064917    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728588995064243966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:36:35 embed-certs-541370 kubelet[2891]: E1010 19:36:35.871780    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-znhn4" podUID="5dc1f764-c7c7-480e-b787-5f5cf6c14a84"
	Oct 10 19:36:45 embed-certs-541370 kubelet[2891]: E1010 19:36:45.067915    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589005067381040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:36:45 embed-certs-541370 kubelet[2891]: E1010 19:36:45.067958    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589005067381040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:36:46 embed-certs-541370 kubelet[2891]: E1010 19:36:46.871741    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-znhn4" podUID="5dc1f764-c7c7-480e-b787-5f5cf6c14a84"
	Oct 10 19:36:55 embed-certs-541370 kubelet[2891]: E1010 19:36:55.070276    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589015069988311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:36:55 embed-certs-541370 kubelet[2891]: E1010 19:36:55.070329    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589015069988311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:00 embed-certs-541370 kubelet[2891]: E1010 19:37:00.871381    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-znhn4" podUID="5dc1f764-c7c7-480e-b787-5f5cf6c14a84"
	Oct 10 19:37:05 embed-certs-541370 kubelet[2891]: E1010 19:37:05.071937    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589025071525810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:05 embed-certs-541370 kubelet[2891]: E1010 19:37:05.072075    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589025071525810,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:11 embed-certs-541370 kubelet[2891]: E1010 19:37:11.872231    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-znhn4" podUID="5dc1f764-c7c7-480e-b787-5f5cf6c14a84"
	Oct 10 19:37:14 embed-certs-541370 kubelet[2891]: E1010 19:37:14.894324    2891 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 19:37:14 embed-certs-541370 kubelet[2891]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 19:37:14 embed-certs-541370 kubelet[2891]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 19:37:14 embed-certs-541370 kubelet[2891]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 19:37:14 embed-certs-541370 kubelet[2891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 19:37:15 embed-certs-541370 kubelet[2891]: E1010 19:37:15.075345    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589035074961159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:15 embed-certs-541370 kubelet[2891]: E1010 19:37:15.075372    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589035074961159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:24 embed-certs-541370 kubelet[2891]: E1010 19:37:24.873741    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-znhn4" podUID="5dc1f764-c7c7-480e-b787-5f5cf6c14a84"
	Oct 10 19:37:25 embed-certs-541370 kubelet[2891]: E1010 19:37:25.078149    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589045076722123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:25 embed-certs-541370 kubelet[2891]: E1010 19:37:25.078258    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589045076722123,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0def20106145a96ba01f0e844ac07a0c46313190a6835ec2ad8167db5f072e2f] <==
	I1010 19:28:22.054302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 19:28:22.080099       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 19:28:22.080194       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 19:28:22.123905       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 19:28:22.128072       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-541370_93528444-997f-4b6b-ab97-82466bf6ac65!
	I1010 19:28:22.130073       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1112543b-e8f4-4def-b9b6-5b576a2e4ce3", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-541370_93528444-997f-4b6b-ab97-82466bf6ac65 became leader
	I1010 19:28:22.229427       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-541370_93528444-997f-4b6b-ab97-82466bf6ac65!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-541370 -n embed-certs-541370
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-541370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-znhn4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-541370 describe pod metrics-server-6867b74b74-znhn4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-541370 describe pod metrics-server-6867b74b74-znhn4: exit status 1 (66.002785ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-znhn4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-541370 describe pod metrics-server-6867b74b74-znhn4: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.36s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-10 19:38:03.89358682 +0000 UTC m=+6031.578522018
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-361847 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-361847 logs -n 25: (2.208017536s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-029826             | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-029826                  | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-029826 --memory=2200 --alsologtostderr   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-541370            | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-029826 image list                           | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:17 UTC | 10 Oct 24 19:18 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320324                  | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-947203        | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-361847  | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-541370                 | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-947203             | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-361847       | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC | 10 Oct 24 19:29 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 19:21:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 19:21:13.943219  148525 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:21:13.943336  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943343  148525 out.go:358] Setting ErrFile to fd 2...
	I1010 19:21:13.943347  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943560  148525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:21:13.944109  148525 out.go:352] Setting JSON to false
	I1010 19:21:13.945219  148525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11020,"bootTime":1728577054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:21:13.945321  148525 start.go:139] virtualization: kvm guest
	I1010 19:21:13.947915  148525 out.go:177] * [default-k8s-diff-port-361847] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:21:13.950021  148525 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:21:13.950037  148525 notify.go:220] Checking for updates...
	I1010 19:21:13.952994  148525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:21:13.954661  148525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:21:13.956438  148525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:21:13.958502  148525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:21:13.960099  148525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:21:13.961930  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:21:13.962374  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.962450  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.978323  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I1010 19:21:13.978926  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.979520  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.979538  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.979954  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.980144  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:13.980446  148525 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:21:13.980745  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.980784  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.996046  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I1010 19:21:13.996534  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.997069  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.997097  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.997530  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.997788  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:14.033593  148525 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 19:21:14.035367  148525 start.go:297] selected driver: kvm2
	I1010 19:21:14.035394  148525 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.035526  148525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:21:14.036341  148525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.036452  148525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:21:14.052462  148525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:21:14.052918  148525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:21:14.052967  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:21:14.053019  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:21:14.053067  148525 start.go:340] cluster config:
	{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.053178  148525 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.055485  148525 out.go:177] * Starting "default-k8s-diff-port-361847" primary control-plane node in "default-k8s-diff-port-361847" cluster
	I1010 19:21:16.773106  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:14.056945  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:21:14.057002  148525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 19:21:14.057014  148525 cache.go:56] Caching tarball of preloaded images
	I1010 19:21:14.057118  148525 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:21:14.057134  148525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 19:21:14.057268  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:21:14.057476  148525 start.go:360] acquireMachinesLock for default-k8s-diff-port-361847: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:21:22.853158  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:25.925174  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:32.005160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:35.077198  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:41.157130  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:44.229127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:50.309136  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:53.381191  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:59.461129  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:02.533201  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:08.613124  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:11.685169  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:17.765161  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:20.837208  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:26.917127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:29.989172  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:36.069147  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:39.141173  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:45.221160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:48.293141  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:51.297376  147758 start.go:364] duration metric: took 3m49.312490934s to acquireMachinesLock for "embed-certs-541370"
	I1010 19:22:51.297453  147758 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:22:51.297464  147758 fix.go:54] fixHost starting: 
	I1010 19:22:51.297787  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:22:51.297848  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:22:51.314087  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1010 19:22:51.314588  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:22:51.315115  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:22:51.315138  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:22:51.315509  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:22:51.315691  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:22:51.315879  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:22:51.317597  147758 fix.go:112] recreateIfNeeded on embed-certs-541370: state=Stopped err=<nil>
	I1010 19:22:51.317621  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	W1010 19:22:51.317781  147758 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:22:51.319664  147758 out.go:177] * Restarting existing kvm2 VM for "embed-certs-541370" ...
	I1010 19:22:51.320967  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Start
	I1010 19:22:51.321134  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring networks are active...
	I1010 19:22:51.322026  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network default is active
	I1010 19:22:51.322468  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network mk-embed-certs-541370 is active
	I1010 19:22:51.322874  147758 main.go:141] libmachine: (embed-certs-541370) Getting domain xml...
	I1010 19:22:51.323687  147758 main.go:141] libmachine: (embed-certs-541370) Creating domain...
	I1010 19:22:51.294881  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:22:51.294927  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295226  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:22:51.295256  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295454  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:22:51.297198  147213 machine.go:96] duration metric: took 4m37.414594306s to provisionDockerMachine
	I1010 19:22:51.297252  147213 fix.go:56] duration metric: took 4m37.436635356s for fixHost
	I1010 19:22:51.297259  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 4m37.436668423s
	W1010 19:22:51.297278  147213 start.go:714] error starting host: provision: host is not running
	W1010 19:22:51.297382  147213 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1010 19:22:51.297396  147213 start.go:729] Will try again in 5 seconds ...
	I1010 19:22:52.568699  147758 main.go:141] libmachine: (embed-certs-541370) Waiting to get IP...
	I1010 19:22:52.569582  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.569952  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.570018  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.569935  148914 retry.go:31] will retry after 261.244287ms: waiting for machine to come up
	I1010 19:22:52.832639  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.833280  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.833310  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.833200  148914 retry.go:31] will retry after 304.116732ms: waiting for machine to come up
	I1010 19:22:53.138770  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.139091  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.139124  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.139055  148914 retry.go:31] will retry after 484.354474ms: waiting for machine to come up
	I1010 19:22:53.624831  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.625293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.625323  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.625234  148914 retry.go:31] will retry after 591.916836ms: waiting for machine to come up
	I1010 19:22:54.219214  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.219732  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.219763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.219673  148914 retry.go:31] will retry after 614.162479ms: waiting for machine to come up
	I1010 19:22:54.835573  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.836038  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.836063  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.835988  148914 retry.go:31] will retry after 824.170953ms: waiting for machine to come up
	I1010 19:22:55.662092  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:55.662646  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:55.662668  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:55.662586  148914 retry.go:31] will retry after 928.483848ms: waiting for machine to come up
	I1010 19:22:56.593200  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:56.593724  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:56.593756  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:56.593679  148914 retry.go:31] will retry after 941.138644ms: waiting for machine to come up
	I1010 19:22:56.299351  147213 start.go:360] acquireMachinesLock for no-preload-320324: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:22:57.536977  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:57.537403  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:57.537429  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:57.537331  148914 retry.go:31] will retry after 1.262203584s: waiting for machine to come up
	I1010 19:22:58.801921  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:58.802420  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:58.802454  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:58.802381  148914 retry.go:31] will retry after 2.154751391s: waiting for machine to come up
	I1010 19:23:00.960100  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:00.960661  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:00.960684  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:00.960607  148914 retry.go:31] will retry after 1.945155171s: waiting for machine to come up
	I1010 19:23:02.907705  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:02.908097  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:02.908129  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:02.908038  148914 retry.go:31] will retry after 3.245262469s: waiting for machine to come up
	I1010 19:23:06.157527  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:06.157897  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:06.157925  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:06.157858  148914 retry.go:31] will retry after 3.973579024s: waiting for machine to come up
	I1010 19:23:11.321975  148123 start.go:364] duration metric: took 3m17.648521443s to acquireMachinesLock for "old-k8s-version-947203"
	I1010 19:23:11.322043  148123 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:11.322056  148123 fix.go:54] fixHost starting: 
	I1010 19:23:11.322537  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:11.322596  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:11.339586  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1010 19:23:11.340091  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:11.340686  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:23:11.340719  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:11.341101  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:11.341295  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:11.341453  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetState
	I1010 19:23:11.343215  148123 fix.go:112] recreateIfNeeded on old-k8s-version-947203: state=Stopped err=<nil>
	I1010 19:23:11.343247  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	W1010 19:23:11.343408  148123 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:11.345653  148123 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-947203" ...
	I1010 19:23:10.135369  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has current primary IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135830  147758 main.go:141] libmachine: (embed-certs-541370) Found IP for machine: 192.168.39.120
	I1010 19:23:10.135839  147758 main.go:141] libmachine: (embed-certs-541370) Reserving static IP address...
	I1010 19:23:10.136283  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.136311  147758 main.go:141] libmachine: (embed-certs-541370) Reserved static IP address: 192.168.39.120
	I1010 19:23:10.136327  147758 main.go:141] libmachine: (embed-certs-541370) DBG | skip adding static IP to network mk-embed-certs-541370 - found existing host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"}
	I1010 19:23:10.136339  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Getting to WaitForSSH function...
	I1010 19:23:10.136351  147758 main.go:141] libmachine: (embed-certs-541370) Waiting for SSH to be available...
	I1010 19:23:10.138861  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139259  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.139293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139438  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH client type: external
	I1010 19:23:10.139472  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa (-rw-------)
	I1010 19:23:10.139517  147758 main.go:141] libmachine: (embed-certs-541370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:10.139541  147758 main.go:141] libmachine: (embed-certs-541370) DBG | About to run SSH command:
	I1010 19:23:10.139562  147758 main.go:141] libmachine: (embed-certs-541370) DBG | exit 0
	I1010 19:23:10.261078  147758 main.go:141] libmachine: (embed-certs-541370) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:10.261533  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetConfigRaw
	I1010 19:23:10.262192  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.265071  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265467  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.265515  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265737  147758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/config.json ...
	I1010 19:23:10.265941  147758 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:10.265960  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:10.266188  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.269186  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269618  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.269649  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269799  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.269984  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270206  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270345  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.270550  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.270834  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.270849  147758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:10.373285  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:10.373316  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373625  147758 buildroot.go:166] provisioning hostname "embed-certs-541370"
	I1010 19:23:10.373660  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373835  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.376552  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.376951  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.376994  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.377132  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.377332  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377489  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377606  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.377745  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.377918  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.377930  147758 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-541370 && echo "embed-certs-541370" | sudo tee /etc/hostname
	I1010 19:23:10.495847  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-541370
	
	I1010 19:23:10.495880  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.498868  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499205  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.499247  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499362  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.499556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499700  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499829  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.499961  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.500187  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.500210  147758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-541370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-541370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-541370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:10.614318  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:10.614357  147758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:10.614412  147758 buildroot.go:174] setting up certificates
	I1010 19:23:10.614429  147758 provision.go:84] configureAuth start
	I1010 19:23:10.614457  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.614763  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.617457  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.617888  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.617916  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.618078  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.620243  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620635  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.620666  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620789  147758 provision.go:143] copyHostCerts
	I1010 19:23:10.620895  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:10.620913  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:10.620998  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:10.621111  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:10.621123  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:10.621159  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:10.621245  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:10.621257  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:10.621292  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:10.621364  147758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.embed-certs-541370 san=[127.0.0.1 192.168.39.120 embed-certs-541370 localhost minikube]
	I1010 19:23:10.697456  147758 provision.go:177] copyRemoteCerts
	I1010 19:23:10.697515  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:10.697547  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.700439  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.700799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700956  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.701162  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.701320  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.701465  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:10.783442  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:10.808446  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 19:23:10.832117  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:23:10.856286  147758 provision.go:87] duration metric: took 241.840139ms to configureAuth
	I1010 19:23:10.856318  147758 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:10.856528  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:10.856640  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.859252  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859677  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.859708  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859916  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.860087  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860222  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.860524  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.860688  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.860702  147758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:11.086349  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:11.086375  147758 machine.go:96] duration metric: took 820.421344ms to provisionDockerMachine
	I1010 19:23:11.086386  147758 start.go:293] postStartSetup for "embed-certs-541370" (driver="kvm2")
	I1010 19:23:11.086401  147758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:11.086423  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.086755  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:11.086783  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.089482  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.089838  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.089860  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.090042  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.090253  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.090410  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.090535  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.172474  147758 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:11.176699  147758 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:11.176733  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:11.176800  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:11.176899  147758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:11.177044  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:11.186985  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:11.211385  147758 start.go:296] duration metric: took 124.982089ms for postStartSetup
	I1010 19:23:11.211442  147758 fix.go:56] duration metric: took 19.913977793s for fixHost
	I1010 19:23:11.211472  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.214421  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214780  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.214812  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214999  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.215219  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215429  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215612  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.215786  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:11.215974  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:11.215985  147758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:11.321786  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588191.295446348
	
	I1010 19:23:11.321814  147758 fix.go:216] guest clock: 1728588191.295446348
	I1010 19:23:11.321822  147758 fix.go:229] Guest: 2024-10-10 19:23:11.295446348 +0000 UTC Remote: 2024-10-10 19:23:11.211447413 +0000 UTC m=+249.373680838 (delta=83.998935ms)
	I1010 19:23:11.321870  147758 fix.go:200] guest clock delta is within tolerance: 83.998935ms
	I1010 19:23:11.321877  147758 start.go:83] releasing machines lock for "embed-certs-541370", held for 20.024455781s
	I1010 19:23:11.321905  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.322169  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:11.325004  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325350  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.325375  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325566  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326090  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326294  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326383  147758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:11.326444  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.326501  147758 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:11.326529  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.329311  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329657  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.329690  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329713  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329866  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330057  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330160  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.330188  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.330204  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330346  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.330538  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330687  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330821  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.406525  147758 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:11.428958  147758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:11.577663  147758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:11.584024  147758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:11.584112  147758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:11.603163  147758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:11.603190  147758 start.go:495] detecting cgroup driver to use...
	I1010 19:23:11.603291  147758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:11.624744  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:11.645477  147758 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:11.645537  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:11.660216  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:11.675019  147758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:11.796038  147758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:11.967750  147758 docker.go:233] disabling docker service ...
	I1010 19:23:11.967828  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:11.983184  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:12.001603  147758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:12.149408  147758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:12.306724  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:12.324302  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:12.345426  147758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:12.345508  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.357812  147758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:12.357883  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.370095  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.382389  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.395000  147758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:12.408429  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.426851  147758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.450568  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.463434  147758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:12.474537  147758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:12.474606  147758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:12.489074  147758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:12.500048  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:12.635695  147758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:12.733511  147758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:12.733593  147758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:12.739072  147758 start.go:563] Will wait 60s for crictl version
	I1010 19:23:12.739138  147758 ssh_runner.go:195] Run: which crictl
	I1010 19:23:12.743675  147758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:12.792272  147758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:12.792379  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.829968  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.862579  147758 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:11.347158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .Start
	I1010 19:23:11.347359  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring networks are active...
	I1010 19:23:11.348252  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network default is active
	I1010 19:23:11.348689  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network mk-old-k8s-version-947203 is active
	I1010 19:23:11.349139  148123 main.go:141] libmachine: (old-k8s-version-947203) Getting domain xml...
	I1010 19:23:11.349799  148123 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:23:12.671472  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting to get IP...
	I1010 19:23:12.672628  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.673145  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.673278  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.673132  149057 retry.go:31] will retry after 280.111667ms: waiting for machine to come up
	I1010 19:23:12.954865  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.955325  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.955377  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.955300  149057 retry.go:31] will retry after 289.967238ms: waiting for machine to come up
	I1010 19:23:13.247039  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.247663  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.247769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.247632  149057 retry.go:31] will retry after 319.085935ms: waiting for machine to come up
	I1010 19:23:12.863797  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:12.867335  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.867760  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:12.867794  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.868029  147758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:12.872503  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:12.887684  147758 kubeadm.go:883] updating cluster {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:12.887809  147758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:12.887853  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:12.924155  147758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:12.924240  147758 ssh_runner.go:195] Run: which lz4
	I1010 19:23:12.928613  147758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:12.933024  147758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:12.933069  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:14.450790  147758 crio.go:462] duration metric: took 1.522223644s to copy over tarball
	I1010 19:23:14.450893  147758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:16.642155  147758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191220673s)
	I1010 19:23:16.642193  147758 crio.go:469] duration metric: took 2.191371146s to extract the tarball
	I1010 19:23:16.642202  147758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:16.679611  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:16.723840  147758 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:16.723865  147758 cache_images.go:84] Images are preloaded, skipping loading
	I1010 19:23:16.723874  147758 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.1 crio true true} ...
	I1010 19:23:16.723998  147758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-541370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:16.724081  147758 ssh_runner.go:195] Run: crio config
	I1010 19:23:16.779659  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:16.779682  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:16.779693  147758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:16.779714  147758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-541370 NodeName:embed-certs-541370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:16.779842  147758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-541370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:16.779904  147758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:16.791424  147758 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:16.791493  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:16.801715  147758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1010 19:23:16.821364  147758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:16.842703  147758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1010 19:23:16.864835  147758 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:16.868928  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
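The two commands above make the control-plane hostname resolvable on the guest: the grep checks for an existing `control-plane.minikube.internal` entry, and the bash pipeline rewrites /etc/hosts so exactly one fresh mapping remains. A minimal Go sketch of the same idempotent rewrite, as a hypothetical helper rather than minikube's own code, run against a scratch file instead of the real /etc/hosts:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so it contains exactly one
// "<ip>\t<hostname>" line, mirroring the grep -v / append pipeline in the log above.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale tab-separated mapping for this hostname (grep -v $'\t<hostname>$').
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Scratch copy for illustration; the log applies this to /etc/hosts on the node.
	path := "/tmp/hosts.example"
	_ = os.WriteFile(path, []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := ensureHostsEntry(path, "192.168.39.120", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	out, _ := os.ReadFile(path)
	fmt.Print(string(out))
}
```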
	I1010 19:23:13.568192  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.568690  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.568721  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.568657  149057 retry.go:31] will retry after 575.032546ms: waiting for machine to come up
	I1010 19:23:14.145650  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.146293  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.146319  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.146234  149057 retry.go:31] will retry after 702.803794ms: waiting for machine to come up
	I1010 19:23:14.851201  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.851736  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.851769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.851706  149057 retry.go:31] will retry after 883.195401ms: waiting for machine to come up
	I1010 19:23:15.736627  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:15.737199  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:15.737252  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:15.737155  149057 retry.go:31] will retry after 794.699815ms: waiting for machine to come up
	I1010 19:23:16.533952  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:16.534510  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:16.534545  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:16.534443  149057 retry.go:31] will retry after 961.751912ms: waiting for machine to come up
	I1010 19:23:17.497535  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:17.498007  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:17.498035  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:17.497936  149057 retry.go:31] will retry after 1.423503471s: waiting for machine to come up
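The retry.go lines above are libmachine polling the KVM network for old-k8s-version-947203's DHCP lease, sleeping a growing, jittered interval between attempts until the VM reports an IP. A small stand-alone sketch of that retry pattern, where lookupIP is only a placeholder for the real libvirt lease lookup:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoLease stands in for "unable to find current IP address of domain ...".
var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is a placeholder for querying the hypervisor for the guest's IP.
func lookupIP() (string, error) { return "", errNoLease }

// waitForIP retries lookupIP with a growing, jittered delay, roughly the
// pattern visible in the retry.go log lines above.
func waitForIP(maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay each round
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", maxWait)
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```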
	I1010 19:23:16.883162  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:17.027646  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:17.045083  147758 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370 for IP: 192.168.39.120
	I1010 19:23:17.045108  147758 certs.go:194] generating shared ca certs ...
	I1010 19:23:17.045130  147758 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:17.045491  147758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:17.045561  147758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:17.045579  147758 certs.go:256] generating profile certs ...
	I1010 19:23:17.045730  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/client.key
	I1010 19:23:17.045814  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key.dd7630a8
	I1010 19:23:17.045874  147758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key
	I1010 19:23:17.046015  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:17.046055  147758 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:17.046075  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:17.046114  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:17.046150  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:17.046177  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:17.046235  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:17.047131  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:17.087057  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:17.137707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:17.181707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:17.213227  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 19:23:17.247846  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:17.275989  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:17.301144  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 19:23:17.326232  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:17.350586  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:17.374666  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:17.399570  147758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:17.417846  147758 ssh_runner.go:195] Run: openssl version
	I1010 19:23:17.424206  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:17.436091  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441020  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441090  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.447318  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:17.459191  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:17.470878  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476185  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476248  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.482808  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:17.494626  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:17.506522  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511484  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511558  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.517445  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:17.529109  147758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:17.534139  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:17.540846  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:17.547429  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:17.554350  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:17.561036  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:17.567571  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
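Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks one question: will this certificate still be valid 24 hours from now? A hedged Go equivalent of that check using crypto/x509; the path in main is just one of the certs named in the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend 86400` answers for d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```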
	I1010 19:23:17.574019  147758 kubeadm.go:392] StartCluster: {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:17.574128  147758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:17.574187  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.612699  147758 cri.go:89] found id: ""
	I1010 19:23:17.612804  147758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:17.623827  147758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:17.623856  147758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:17.623917  147758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:17.634732  147758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:17.635754  147758 kubeconfig.go:125] found "embed-certs-541370" server: "https://192.168.39.120:8443"
	I1010 19:23:17.637813  147758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:17.648543  147758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I1010 19:23:17.648590  147758 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:17.648606  147758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:17.648671  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.693966  147758 cri.go:89] found id: ""
	I1010 19:23:17.694057  147758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:17.715977  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:17.727871  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:17.727891  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:17.727942  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:17.738274  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:17.738340  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:17.748925  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:17.758945  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:17.759008  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:17.769169  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.779196  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:17.779282  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.790948  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:17.802264  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:17.802332  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:17.814009  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:17.826820  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:17.947270  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.128720  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.181409785s)
	I1010 19:23:19.128770  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.343735  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.419728  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.529802  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:19.529930  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.030019  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.530833  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.558314  147758 api_server.go:72] duration metric: took 1.028510044s to wait for apiserver process to appear ...
	I1010 19:23:20.558350  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:23:20.558375  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:20.558991  147758 api_server.go:269] stopped: https://192.168.39.120:8443/healthz: Get "https://192.168.39.120:8443/healthz": dial tcp 192.168.39.120:8443: connect: connection refused
	I1010 19:23:21.058727  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:18.923057  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:18.923636  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:18.923671  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:18.923582  149057 retry.go:31] will retry after 2.09836426s: waiting for machine to come up
	I1010 19:23:21.023500  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:21.024054  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:21.024084  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:21.023999  149057 retry.go:31] will retry after 2.809962093s: waiting for machine to come up
	I1010 19:23:23.187135  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:23:23.187187  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:23:23.187203  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.233367  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.233414  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:23.558658  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.575108  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.575139  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.058679  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.065735  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:24.065763  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.559440  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.565460  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:23:24.571828  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:23:24.571859  147758 api_server.go:131] duration metric: took 4.013501806s to wait for apiserver health ...
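The loop above is the usual apiserver readiness sequence: connection refused while the static pod comes up, 403 for the anonymous probe, 500 while post-start hooks finish, and finally 200 ok. A hedged Go sketch of such a polling loop; TLS verification is skipped here purely to keep the example short, whereas a real client would trust the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses,
// echoing the api_server.go loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: production code should verify the apiserver's certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.120:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```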
	I1010 19:23:24.571869  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:24.571875  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:24.573875  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:23:24.575458  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:23:24.586870  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:23:24.624362  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:23:24.643465  147758 system_pods.go:59] 8 kube-system pods found
	I1010 19:23:24.643516  147758 system_pods.go:61] "coredns-7c65d6cfc9-fgtkg" [df696e79-ca6f-4d73-a57e-9c6cdc93c505] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:23:24.643532  147758 system_pods.go:61] "etcd-embed-certs-541370" [254fa12c-b0d2-499f-8dd9-c1505efeaaab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:23:24.643543  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [fcd3809d-d325-4481-8e86-c246e29458fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:23:24.643565  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ab0fdd6b-d9b7-48dc-b82f-29b21d2295ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:23:24.643584  147758 system_pods.go:61] "kube-proxy-f5l6x" [446383fa-44c5-4b9e-bfc5-e38799597e75] Running
	I1010 19:23:24.643592  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [1c6af7e7-ce16-4ae2-8feb-e5d474173de1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:23:24.643603  147758 system_pods.go:61] "metrics-server-6867b74b74-kw529" [aad00321-d499-4563-849e-286d6e699fc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:23:24.643611  147758 system_pods.go:61] "storage-provisioner" [df4ae621-5066-4276-9276-a0538a9f9dd1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:23:24.643620  147758 system_pods.go:74] duration metric: took 19.234558ms to wait for pod list to return data ...
	I1010 19:23:24.643637  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:23:24.651647  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:23:24.651683  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:23:24.651699  147758 node_conditions.go:105] duration metric: took 8.056629ms to run NodePressure ...
	I1010 19:23:24.651720  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:24.915651  147758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921104  147758 kubeadm.go:739] kubelet initialised
	I1010 19:23:24.921131  147758 kubeadm.go:740] duration metric: took 5.44643ms waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921142  147758 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:23:24.927535  147758 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:23.837827  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:23.838271  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:23.838295  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:23.838220  149057 retry.go:31] will retry after 2.562944835s: waiting for machine to come up
	I1010 19:23:26.402699  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:26.403250  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:26.403294  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:26.403192  149057 retry.go:31] will retry after 3.867656846s: waiting for machine to come up
	I1010 19:23:26.932764  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:28.936055  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.434959  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
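pod_ready.go above keeps reporting Ready:"False" for coredns-7c65d6cfc9-fgtkg until the pod's PodReady condition turns True. The underlying check is just a scan of the pod's status conditions; a minimal sketch against the k8s.io/api types, with the client-go fetch of the Pod omitted:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition is True, which is what
// the pod_ready.go log lines above are waiting for.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical pod object; in practice it would come from a client-go Get or Watch.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod)) // false
}
```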
	I1010 19:23:31.893914  148525 start.go:364] duration metric: took 2m17.836396131s to acquireMachinesLock for "default-k8s-diff-port-361847"
	I1010 19:23:31.893993  148525 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:31.894007  148525 fix.go:54] fixHost starting: 
	I1010 19:23:31.894438  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:31.894502  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:31.914583  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I1010 19:23:31.915054  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:31.915535  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:23:31.915560  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:31.915967  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:31.916207  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:31.916387  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:23:31.918035  148525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-361847: state=Stopped err=<nil>
	I1010 19:23:31.918073  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	W1010 19:23:31.918241  148525 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:31.920390  148525 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-361847" ...
	I1010 19:23:30.275222  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.275762  148123 main.go:141] libmachine: (old-k8s-version-947203) Found IP for machine: 192.168.61.112
	I1010 19:23:30.275779  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserving static IP address...
	I1010 19:23:30.275794  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has current primary IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.276478  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.276529  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserved static IP address: 192.168.61.112
	I1010 19:23:30.276553  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | skip adding static IP to network mk-old-k8s-version-947203 - found existing host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"}
	I1010 19:23:30.276574  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:23:30.276590  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting for SSH to be available...
	I1010 19:23:30.278486  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278854  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.278890  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278984  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:23:30.279016  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:23:30.279051  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:30.279068  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:23:30.279080  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:23:30.405053  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:30.405492  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:23:30.406224  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.408830  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409178  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.409204  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409462  148123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:23:30.409673  148123 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:30.409693  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:30.409916  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.412131  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412400  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.412427  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.412770  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.412965  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.413117  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.413368  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.413653  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.413668  148123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:30.525149  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:30.525176  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525417  148123 buildroot.go:166] provisioning hostname "old-k8s-version-947203"
	I1010 19:23:30.525478  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525703  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.528343  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528681  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.528708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528841  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.529035  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529185  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529303  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.529460  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.529635  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.529645  148123 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947203 && echo "old-k8s-version-947203" | sudo tee /etc/hostname
	I1010 19:23:30.655605  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947203
	
	I1010 19:23:30.655644  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.658439  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.658834  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.658869  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.659063  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.659285  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659458  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.659733  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.659905  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.659921  148123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:30.781974  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
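provisionDockerMachine above drives the freshly booted guest entirely over SSH, authenticating with the machine's id_rsa and running small shell snippets such as the hostname and /etc/hosts fix-ups. A minimal sketch of executing one such command with golang.org/x/crypto/ssh; host key verification is disabled only to keep the example short:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH executes one shell command on addr using key-based auth,
// roughly what libmachine does for each "About to run SSH command" line.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Illustration only: production code should verify the host key.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.112:22", "docker",
		"/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa",
		"hostname")
	fmt.Println(out, err)
}
```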
	I1010 19:23:30.782011  148123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:30.782033  148123 buildroot.go:174] setting up certificates
	I1010 19:23:30.782048  148123 provision.go:84] configureAuth start
	I1010 19:23:30.782059  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.782379  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.785189  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785584  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.785613  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785782  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.788086  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788432  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.788458  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788582  148123 provision.go:143] copyHostCerts
	I1010 19:23:30.788645  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:30.788660  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:30.788729  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:30.788839  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:30.788867  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:30.788900  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:30.788977  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:30.788986  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:30.789013  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:30.789081  148123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947203 san=[127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]
	I1010 19:23:31.240626  148123 provision.go:177] copyRemoteCerts
	I1010 19:23:31.240698  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:31.240737  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.243678  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244108  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.244138  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244356  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.244565  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.244765  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.244929  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.331337  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:31.356266  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1010 19:23:31.381186  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:31.405301  148123 provision.go:87] duration metric: took 623.235619ms to configureAuth
	I1010 19:23:31.405337  148123 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:31.405541  148123 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:23:31.405620  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.408111  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408444  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.408470  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408695  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.408947  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409109  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.409374  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.409583  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.409609  148123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:31.647023  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:31.647050  148123 machine.go:96] duration metric: took 1.237364012s to provisionDockerMachine
	I1010 19:23:31.647063  148123 start.go:293] postStartSetup for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:23:31.647073  148123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:31.647104  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.647464  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:31.647502  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.649991  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650288  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.650311  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650484  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.650637  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.650832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.650968  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.735985  148123 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:31.740689  148123 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:31.740725  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:31.740803  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:31.740947  148123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:31.741057  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:31.751383  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:31.777091  148123 start.go:296] duration metric: took 130.011618ms for postStartSetup
	I1010 19:23:31.777134  148123 fix.go:56] duration metric: took 20.455078936s for fixHost
	I1010 19:23:31.777158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.780083  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780661  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.780708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.781069  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781245  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781382  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.781534  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.781711  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.781721  148123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:31.893727  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588211.849777581
	
	I1010 19:23:31.893756  148123 fix.go:216] guest clock: 1728588211.849777581
	I1010 19:23:31.893763  148123 fix.go:229] Guest: 2024-10-10 19:23:31.849777581 +0000 UTC Remote: 2024-10-10 19:23:31.777138808 +0000 UTC m=+218.253674512 (delta=72.638773ms)
	I1010 19:23:31.893806  148123 fix.go:200] guest clock delta is within tolerance: 72.638773ms
	I1010 19:23:31.893813  148123 start.go:83] releasing machines lock for "old-k8s-version-947203", held for 20.571797307s
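	(A minimal Go sketch of the guest-clock tolerance check logged just above, using the two timestamps from this run; the 2-second tolerance and the standalone program are illustrative assumptions, not minikube's actual fix.go implementation.)

	// clockdelta.go - illustrative sketch only, not minikube source.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamps taken from the log above: guest `date +%s.%N` vs. the host-side reading.
		guest := time.Date(2024, 10, 10, 19, 23, 31, 849777581, time.UTC)
		host := time.Date(2024, 10, 10, 19, 23, 31, 777138808, time.UTC)
		tolerance := 2 * time.Second // assumed tolerance for this sketch

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // prints 72.638773ms for these values
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; clock would need a resync\n", delta)
		}
	}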
	I1010 19:23:31.893848  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.894151  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:31.896747  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897156  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.897207  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897385  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898036  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898375  148123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:31.898422  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.898461  148123 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:31.898487  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.901274  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901450  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901608  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901649  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901784  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.901920  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901945  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901952  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902112  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.902130  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902306  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902348  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.902412  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902533  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:32.016264  148123 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:32.022618  148123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:32.169502  148123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:32.175960  148123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:32.176039  148123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:32.193584  148123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:32.193615  148123 start.go:495] detecting cgroup driver to use...
	I1010 19:23:32.193698  148123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:32.210923  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:32.227172  148123 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:32.227254  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:32.242142  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:32.256896  148123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:32.387462  148123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:32.575184  148123 docker.go:233] disabling docker service ...
	I1010 19:23:32.575257  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:32.590667  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:32.604825  148123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:32.739827  148123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:32.863648  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:32.879435  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:32.899582  148123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:23:32.899659  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.915010  148123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:32.915081  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.931181  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.944038  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.955182  148123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:32.972220  148123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:32.982681  148123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:32.982739  148123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:32.997012  148123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:33.009468  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:33.180403  148123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:33.284934  148123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:33.285008  148123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:33.290063  148123 start.go:563] Will wait 60s for crictl version
	I1010 19:23:33.290124  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:33.294752  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:33.342710  148123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:33.342792  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.372821  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.409395  148123 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:23:33.411012  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:33.414374  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.414789  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:33.414836  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.415084  148123 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:33.420164  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:33.433442  148123 kubeadm.go:883] updating cluster {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:33.433631  148123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:23:33.433717  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:33.480013  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:33.480078  148123 ssh_runner.go:195] Run: which lz4
	I1010 19:23:33.485443  148123 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:33.490986  148123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:33.491032  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:23:31.921836  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Start
	I1010 19:23:31.922036  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring networks are active...
	I1010 19:23:31.922890  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network default is active
	I1010 19:23:31.923271  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network mk-default-k8s-diff-port-361847 is active
	I1010 19:23:31.923685  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Getting domain xml...
	I1010 19:23:31.924449  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Creating domain...
	I1010 19:23:33.241164  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting to get IP...
	I1010 19:23:33.242273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242713  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242814  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.242702  149213 retry.go:31] will retry after 195.013046ms: waiting for machine to come up
	I1010 19:23:33.438965  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439452  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.439379  149213 retry.go:31] will retry after 344.223823ms: waiting for machine to come up
	I1010 19:23:33.785167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785833  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785864  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.785780  149213 retry.go:31] will retry after 342.787658ms: waiting for machine to come up
	I1010 19:23:33.435066  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:34.936768  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:34.936800  147758 pod_ready.go:82] duration metric: took 10.009235225s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:34.936814  147758 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944395  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.944430  147758 pod_ready.go:82] duration metric: took 1.007599746s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944445  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953224  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.953255  147758 pod_ready.go:82] duration metric: took 8.801702ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953266  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.227279  148123 crio.go:462] duration metric: took 1.74186828s to copy over tarball
	I1010 19:23:35.227383  148123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:38.216499  148123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.989082447s)
	I1010 19:23:38.216533  148123 crio.go:469] duration metric: took 2.989218587s to extract the tarball
	I1010 19:23:38.216541  148123 ssh_runner.go:146] rm: /preloaded.tar.lz4
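	(The preload step above stats /preloaded.tar.lz4 on the VM, copies the ~473 MB cached tarball over scp, and unpacks it with the lz4-aware tar invocation logged at 19:23:35. Below is a minimal Go sketch of that extraction command, run locally via os/exec rather than over minikube's ssh_runner; the tarball path and the use of sudo are illustrative placeholders, not minikube code.)

	// preload_extract.go - illustrative sketch only.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		tarball := "/preloaded.tar.lz4" // placeholder; minikube scp's the cached tarball to this path first
		// Same flags as the logged command: preserve security xattrs, decompress with lz4, extract under /var.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extracting preload failed: %v\n%s", err, out)
		}
		log.Println("preloaded container images extracted under /var")
	}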
	I1010 19:23:38.259699  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:38.308284  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:38.308313  148123 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:23:38.308422  148123 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:23:38.308477  148123 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.308482  148123 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.308593  148123 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.308617  148123 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.308405  148123 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.308446  148123 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.308530  148123 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310558  148123 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.310612  148123 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.310642  148123 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.310652  148123 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310645  148123 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.310560  148123 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.310733  148123 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.311068  148123 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:23:38.469174  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.469563  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.488463  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.490815  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.491841  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.496079  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.501292  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:23:34.130443  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130998  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.130915  149213 retry.go:31] will retry after 393.100812ms: waiting for machine to come up
	I1010 19:23:34.525570  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526032  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526060  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.525980  149213 retry.go:31] will retry after 465.468437ms: waiting for machine to come up
	I1010 19:23:34.992775  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993348  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993386  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.993287  149213 retry.go:31] will retry after 907.884473ms: waiting for machine to come up
	I1010 19:23:35.902481  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902942  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:35.902878  149213 retry.go:31] will retry after 1.157806188s: waiting for machine to come up
	I1010 19:23:37.062068  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062777  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:37.062706  149213 retry.go:31] will retry after 1.432559208s: waiting for machine to come up
	I1010 19:23:38.496653  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497153  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:38.497066  149213 retry.go:31] will retry after 1.559787003s: waiting for machine to come up
	I1010 19:23:37.961068  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.065559  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.528757  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.528786  147758 pod_ready.go:82] duration metric: took 4.575513259s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.528802  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538002  147758 pod_ready.go:93] pod "kube-proxy-f5l6x" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.538034  147758 pod_ready.go:82] duration metric: took 9.22357ms for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538049  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543594  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.543615  147758 pod_ready.go:82] duration metric: took 5.558665ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543626  147758 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:38.581315  148123 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:23:38.581361  148123 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:23:38.581385  148123 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.581407  148123 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.581442  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.581457  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.642668  148123 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:23:38.642721  148123 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.642777  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658235  148123 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:23:38.658291  148123 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.658288  148123 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:23:38.658328  148123 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.658346  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658373  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.678038  148123 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:23:38.678097  148123 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.678155  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682719  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.682773  148123 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:23:38.682721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.682791  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.682812  148123 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:23:38.682845  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.682872  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.691181  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842626  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:38.842714  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.842754  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.842797  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.842721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.842851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842789  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.989994  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.005815  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:39.005923  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:39.010029  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:39.010105  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:39.010150  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:39.010230  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:39.134646  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:39.141431  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.176750  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:23:39.176945  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:23:39.176989  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:23:39.177038  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:23:39.177073  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:23:39.177104  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:23:39.335056  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:23:39.335113  148123 cache_images.go:92] duration metric: took 1.026784312s to LoadCachedImages
	W1010 19:23:39.335180  148123 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1010 19:23:39.335192  148123 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.20.0 crio true true} ...
	I1010 19:23:39.335305  148123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-947203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:39.335380  148123 ssh_runner.go:195] Run: crio config
	I1010 19:23:39.394307  148123 cni.go:84] Creating CNI manager for ""
	I1010 19:23:39.394338  148123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:39.394350  148123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:39.394378  148123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947203 NodeName:old-k8s-version-947203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:23:39.394572  148123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-947203"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:39.394662  148123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:23:39.405510  148123 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:39.405600  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:39.415968  148123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1010 19:23:39.443234  148123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:39.462448  148123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:23:39.481959  148123 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:39.486188  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:39.501922  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:39.642769  148123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:39.661335  148123 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203 for IP: 192.168.61.112
	I1010 19:23:39.661363  148123 certs.go:194] generating shared ca certs ...
	I1010 19:23:39.661384  148123 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:39.661595  148123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:39.661658  148123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:39.661671  148123 certs.go:256] generating profile certs ...
	I1010 19:23:39.661779  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key
	I1010 19:23:39.661867  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52
	I1010 19:23:39.661922  148123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key
	I1010 19:23:39.662312  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:39.662395  148123 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:39.662410  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:39.662447  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:39.662501  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:39.662531  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:39.662622  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:39.663372  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:39.710366  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:39.752724  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:39.801408  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:39.843557  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 19:23:39.892522  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:39.938519  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:39.966140  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:39.992974  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:40.019381  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:40.045769  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:40.080683  148123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:40.099747  148123 ssh_runner.go:195] Run: openssl version
	I1010 19:23:40.107865  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:40.123158  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128594  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128660  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.135319  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:40.147514  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:40.162463  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168387  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168512  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.176304  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:40.191306  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:40.204196  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211044  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211122  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.219574  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
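The three blocks above repeat one pattern per certificate: place it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs under that hash so system TLS clients trust it. A minimal sketch of the pattern for the minikube CA (paths and the b5213941 hash match this run; linking the cert directly is a simplification of the two-step linking above):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for this CA
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # the ".0" suffix disambiguates certs sharing a hash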
	I1010 19:23:40.232634  148123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:40.237639  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:40.244141  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:40.251188  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:40.258538  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:40.265029  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:40.271754  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
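Each of the -checkend 86400 runs above asks OpenSSL whether the certificate will still be valid 24 hours from now (exit status 0 means yes); this is how the restart path decides it can reuse the existing control-plane certs. A condensed sketch over a few of the same files (the list is trimmed for brevity):

	for c in apiserver-kubelet-client etcd/server front-proxy-client; do
	  # exit status 0 = still valid for at least 86400 seconds
	  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	    || echo "$c.crt expires within 24h"
	done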
	I1010 19:23:40.278352  148123 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:40.278453  148123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:40.278518  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.317771  148123 cri.go:89] found id: ""
	I1010 19:23:40.317838  148123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:40.330494  148123 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:40.330520  148123 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:40.330576  148123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:40.341984  148123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:40.343386  148123 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:23:40.344334  148123 kubeconfig.go:62] /home/jenkins/minikube-integration/19787-81676/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-947203" cluster setting kubeconfig missing "old-k8s-version-947203" context setting]
	I1010 19:23:40.345360  148123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:40.347128  148123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:40.357902  148123 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I1010 19:23:40.357944  148123 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:40.357961  148123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:40.358031  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.399970  148123 cri.go:89] found id: ""
	I1010 19:23:40.400057  148123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:40.418527  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:40.429182  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:40.429217  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:40.429262  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:40.439287  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:40.439363  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:40.449479  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:40.459288  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:40.459359  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:40.472733  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.484890  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:40.484958  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.495301  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:40.504750  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:40.504820  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:40.515034  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:40.526168  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:40.675022  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.566371  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.815296  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.930082  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
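Rather than a full kubeadm init, the restart path re-runs individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, then local etcd. A condensed, illustrative form of the same sequence (calling the binary directly instead of via env PATH, which is equivalent here):

	KUBEADM=/var/lib/minikube/binaries/v1.20.0/kubeadm
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo $KUBEADM init phase $phase --config "$CFG"   # $phase left unquoted so "certs all" splits into two args
	done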
	I1010 19:23:42.027597  148123 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:42.027713  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:42.528219  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.527735  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
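The pgrep lines above and below are a polling loop: roughly every 500ms the runner checks whether a kube-apiserver process started by minikube exists yet. An equivalent standalone sketch (the 60s cap is illustrative; the real wait is managed in Go):

	for _ in $(seq 1 120); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break   # -f matches against the full command line
	  sleep 0.5
	done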
	I1010 19:23:40.058247  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058783  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058835  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:40.058696  149213 retry.go:31] will retry after 2.214094081s: waiting for machine to come up
	I1010 19:23:42.274629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275194  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:42.275106  149213 retry.go:31] will retry after 2.126528577s: waiting for machine to come up
	I1010 19:23:42.550865  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:45.051043  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:44.028463  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.527916  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.027911  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.528797  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.028772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.527982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.027799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.527894  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.028352  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.527935  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.403101  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403575  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403616  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:44.403534  149213 retry.go:31] will retry after 3.603964622s: waiting for machine to come up
	I1010 19:23:48.008726  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009142  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009191  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:48.009100  149213 retry.go:31] will retry after 3.639744981s: waiting for machine to come up
	I1010 19:23:47.551003  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:49.661572  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:52.858209  147213 start.go:364] duration metric: took 56.558774237s to acquireMachinesLock for "no-preload-320324"
	I1010 19:23:52.858274  147213 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:52.858283  147213 fix.go:54] fixHost starting: 
	I1010 19:23:52.858705  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:52.858742  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:52.878428  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I1010 19:23:52.878955  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:52.879563  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:23:52.879599  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:52.879945  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:52.880144  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:23:52.880282  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:23:52.881626  147213 fix.go:112] recreateIfNeeded on no-preload-320324: state=Stopped err=<nil>
	I1010 19:23:52.881650  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	W1010 19:23:52.881799  147213 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:52.883912  147213 out.go:177] * Restarting existing kvm2 VM for "no-preload-320324" ...
	I1010 19:23:49.028421  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:49.528271  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.028462  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.527867  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.028782  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.528581  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.028732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.027852  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.528647  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.885239  147213 main.go:141] libmachine: (no-preload-320324) Calling .Start
	I1010 19:23:52.885429  147213 main.go:141] libmachine: (no-preload-320324) Ensuring networks are active...
	I1010 19:23:52.886211  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network default is active
	I1010 19:23:52.886749  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network mk-no-preload-320324 is active
	I1010 19:23:52.887310  147213 main.go:141] libmachine: (no-preload-320324) Getting domain xml...
	I1010 19:23:52.888034  147213 main.go:141] libmachine: (no-preload-320324) Creating domain...
	I1010 19:23:51.652975  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653464  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Found IP for machine: 192.168.50.32
	I1010 19:23:51.653487  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserving static IP address...
	I1010 19:23:51.653509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has current primary IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653910  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.653956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | skip adding static IP to network mk-default-k8s-diff-port-361847 - found existing host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"}
	I1010 19:23:51.653974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserved static IP address: 192.168.50.32
	I1010 19:23:51.653993  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for SSH to be available...
	I1010 19:23:51.654006  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Getting to WaitForSSH function...
	I1010 19:23:51.655927  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656210  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.656240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656334  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH client type: external
	I1010 19:23:51.656372  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa (-rw-------)
	I1010 19:23:51.656409  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:51.656425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | About to run SSH command:
	I1010 19:23:51.656436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | exit 0
	I1010 19:23:51.780839  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:51.781206  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetConfigRaw
	I1010 19:23:51.781939  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:51.784347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784663  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.784696  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784918  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:23:51.785134  148525 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:51.785158  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:51.785403  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.787817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788306  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.788347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788547  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.788807  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789038  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789274  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.789515  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.789802  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.789825  148525 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:51.893367  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:51.893399  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893652  148525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-361847"
	I1010 19:23:51.893699  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.896986  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897377  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.897422  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897662  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.897815  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.897949  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.898064  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.898302  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.898489  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.898502  148525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-361847 && echo "default-k8s-diff-port-361847" | sudo tee /etc/hostname
	I1010 19:23:52.015158  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361847
	
	I1010 19:23:52.015199  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.018094  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018468  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.018497  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018683  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.018901  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019039  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.019474  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.019690  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.019708  148525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-361847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-361847/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-361847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:52.133923  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:52.133960  148525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:52.134007  148525 buildroot.go:174] setting up certificates
	I1010 19:23:52.134023  148525 provision.go:84] configureAuth start
	I1010 19:23:52.134043  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:52.134351  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.137242  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137637  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.137670  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137860  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.140264  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.140672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140833  148525 provision.go:143] copyHostCerts
	I1010 19:23:52.140907  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:52.140922  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:52.140977  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:52.141088  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:52.141098  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:52.141118  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:52.141175  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:52.141182  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:52.141213  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:52.141264  148525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-361847 san=[127.0.0.1 192.168.50.32 default-k8s-diff-port-361847 localhost minikube]
	I1010 19:23:52.241146  148525 provision.go:177] copyRemoteCerts
	I1010 19:23:52.241212  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:52.241241  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.244061  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244463  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.244490  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244731  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.244929  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.245110  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.245228  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.327309  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:52.352288  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 19:23:52.376308  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:52.400807  148525 provision.go:87] duration metric: took 266.765119ms to configureAuth
	I1010 19:23:52.400862  148525 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:52.401065  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:52.401171  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.403552  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.403919  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.403950  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.404173  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.404371  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404513  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.404743  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.404927  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.404949  148525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:52.622902  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:52.622930  148525 machine.go:96] duration metric: took 837.779579ms to provisionDockerMachine
	I1010 19:23:52.622942  148525 start.go:293] postStartSetup for "default-k8s-diff-port-361847" (driver="kvm2")
	I1010 19:23:52.622952  148525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:52.622968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.623331  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:52.623369  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.626106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626435  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.626479  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626721  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.626932  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.627091  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.627262  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.708050  148525 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:52.712524  148525 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:52.712550  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:52.712608  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:52.712688  148525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:52.712782  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:52.723719  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:52.747686  148525 start.go:296] duration metric: took 124.729371ms for postStartSetup
	I1010 19:23:52.747727  148525 fix.go:56] duration metric: took 20.853721623s for fixHost
	I1010 19:23:52.747749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.750316  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750645  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.750677  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.751046  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751195  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751333  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.751511  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.751733  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.751749  148525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:52.857986  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588232.831281012
	
	I1010 19:23:52.858019  148525 fix.go:216] guest clock: 1728588232.831281012
	I1010 19:23:52.858029  148525 fix.go:229] Guest: 2024-10-10 19:23:52.831281012 +0000 UTC Remote: 2024-10-10 19:23:52.747731551 +0000 UTC m=+158.845659062 (delta=83.549461ms)
	I1010 19:23:52.858075  148525 fix.go:200] guest clock delta is within tolerance: 83.549461ms
	I1010 19:23:52.858088  148525 start.go:83] releasing machines lock for "default-k8s-diff-port-361847", held for 20.964121636s
	I1010 19:23:52.858120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.858491  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.861220  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.861672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861828  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862337  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862548  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862655  148525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:52.862702  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.862825  148525 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:52.862854  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.865579  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.865960  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866290  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866300  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.866319  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866423  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866496  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866648  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866671  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.866798  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866910  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.966354  148525 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:52.972526  148525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:53.119801  148525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:53.126287  148525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:53.126355  148525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:53.147301  148525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:53.147325  148525 start.go:495] detecting cgroup driver to use...
	I1010 19:23:53.147381  148525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:53.167368  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:53.183239  148525 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:53.183308  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:53.203230  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:53.217261  148525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:53.343555  148525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:53.491952  148525 docker.go:233] disabling docker service ...
	I1010 19:23:53.492054  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:53.508136  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:53.521662  148525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:53.651858  148525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:53.781954  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:53.803934  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:53.826070  148525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:53.826146  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.837506  148525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:53.837587  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.848653  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.860511  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.873254  148525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:53.887862  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.899507  148525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.923325  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.934999  148525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:53.946869  148525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:53.946945  148525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:53.968116  148525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:53.980109  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:54.106345  148525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:54.210345  148525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:54.210417  148525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:54.215968  148525 start.go:563] Will wait 60s for crictl version
	I1010 19:23:54.216037  148525 ssh_runner.go:195] Run: which crictl
	I1010 19:23:54.219885  148525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:54.260286  148525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:54.260375  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.289908  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.320940  148525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
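	[Editor's note: the lines above show the runtime preparation phase: containerd, cri-docker and docker are stopped/masked, crictl is pointed at the CRI-O socket, crio.conf is patched for the pause image and the cgroupfs cgroup manager, br_netfilter is loaded and IP forwarding enabled, and crio is restarted and verified with crictl. The following is a minimal illustrative sketch of that same sequence, not minikube's actual ssh_runner code; it assumes root on the guest and simply shells out to the commands recorded in the log.]

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one of the node-preparation commands from the log above.
	// Failures are reported but not fatal, since some services (e.g. cri-docker)
	// may not exist on every image.
	func run(cmd string) {
		out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("WARN %q: %v\n%s", cmd, err, out)
		}
	}

	func main() {
		for _, cmd := range []string{
			"systemctl stop -f containerd",
			"systemctl disable cri-docker.socket",
			"systemctl mask cri-docker.service",
			"systemctl disable docker.socket",
			"systemctl mask docker.service",
			`printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml`,
			`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			"modprobe br_netfilter",
			"sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'",
			"systemctl daemon-reload",
			"systemctl restart crio",
			"crictl version",
		} {
			run(cmd)
		}
	}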
	I1010 19:23:52.050137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.060194  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:56.551981  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.027982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.528065  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.027890  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.527753  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.027877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.028263  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.028743  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.528039  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.234149  147213 main.go:141] libmachine: (no-preload-320324) Waiting to get IP...
	I1010 19:23:54.235147  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.235598  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.235657  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.235580  149378 retry.go:31] will retry after 308.921504ms: waiting for machine to come up
	I1010 19:23:54.546327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.547002  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.547029  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.546956  149378 retry.go:31] will retry after 288.92327ms: waiting for machine to come up
	I1010 19:23:54.837625  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.838136  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.838164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.838054  149378 retry.go:31] will retry after 321.948113ms: waiting for machine to come up
	I1010 19:23:55.161940  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.162494  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.162526  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.162441  149378 retry.go:31] will retry after 573.848095ms: waiting for machine to come up
	I1010 19:23:55.739080  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.739592  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.739620  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.739494  149378 retry.go:31] will retry after 529.087622ms: waiting for machine to come up
	I1010 19:23:56.270324  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.270899  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.270929  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.270850  149378 retry.go:31] will retry after 629.204989ms: waiting for machine to come up
	I1010 19:23:56.901836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.902283  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.902325  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.902222  149378 retry.go:31] will retry after 804.309499ms: waiting for machine to come up
	I1010 19:23:57.708806  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:57.709175  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:57.709208  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:57.709151  149378 retry.go:31] will retry after 1.204078295s: waiting for machine to come up
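	[Editor's note: the "waiting for machine to come up" lines above show libmachine repeatedly querying the libvirt network for the domain's DHCP lease, with an irregular, growing delay between attempts. A minimal sketch of that retry-with-backoff pattern follows; lookupIP is a hypothetical stand-in for the real lease query, and the exact backoff schedule is an assumption.]

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
	// of the machine's network for an address matching its MAC.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address of domain")
	}

	func main() {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= 20; attempt++ {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			// Grow the delay and add jitter, roughly matching the
			// 300ms..4s intervals seen in the log above.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
		fmt.Println("gave up waiting for machine to come up")
	}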
	I1010 19:23:54.322534  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:54.325744  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326217  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:54.326257  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326533  148525 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:54.331527  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:54.343881  148525 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:54.344033  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:54.344084  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:54.389066  148525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:54.389149  148525 ssh_runner.go:195] Run: which lz4
	I1010 19:23:54.393550  148525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:54.397787  148525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:54.397833  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:55.897111  148525 crio.go:462] duration metric: took 1.503593301s to copy over tarball
	I1010 19:23:55.897212  148525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:58.060691  148525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16343467s)
	I1010 19:23:58.060731  148525 crio.go:469] duration metric: took 2.163580526s to extract the tarball
	I1010 19:23:58.060741  148525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:58.103877  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:58.162881  148525 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:58.162907  148525 cache_images.go:84] Images are preloaded, skipping loading
	I1010 19:23:58.162915  148525 kubeadm.go:934] updating node { 192.168.50.32 8444 v1.31.1 crio true true} ...
	I1010 19:23:58.163031  148525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-361847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:58.163098  148525 ssh_runner.go:195] Run: crio config
	I1010 19:23:58.219804  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:23:58.219827  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:58.219837  148525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:58.219861  148525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-361847 NodeName:default-k8s-diff-port-361847 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:58.219982  148525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-361847"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:58.220042  148525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:58.231444  148525 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:58.231565  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:58.241835  148525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1010 19:23:58.259408  148525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:58.276571  148525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I1010 19:23:58.294640  148525 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:58.298503  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:58.312286  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:58.449757  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:58.467342  148525 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847 for IP: 192.168.50.32
	I1010 19:23:58.467377  148525 certs.go:194] generating shared ca certs ...
	I1010 19:23:58.467398  148525 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:58.467583  148525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:58.467642  148525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:58.467655  148525 certs.go:256] generating profile certs ...
	I1010 19:23:58.467826  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/client.key
	I1010 19:23:58.467895  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key.ae5e3f04
	I1010 19:23:58.467951  148525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key
	I1010 19:23:58.468089  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:58.468136  148525 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:58.468153  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:58.468194  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:58.468226  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:58.468260  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:58.468317  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:58.468931  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:58.529632  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:58.571900  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:58.612599  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:58.645536  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 19:23:58.675961  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:23:58.700712  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:58.725355  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:58.751138  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:58.775832  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:58.800729  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:58.825558  148525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:58.843331  148525 ssh_runner.go:195] Run: openssl version
	I1010 19:23:58.849271  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:58.861031  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865721  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865797  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.871961  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:58.884520  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:58.896744  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901507  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901571  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.907366  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:58.919784  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:58.931972  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936897  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936981  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.943007  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
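	[Editor's note: the certificate installation above follows the OpenSSL subject-hash convention: each CA file under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under the name <subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem, 51391683.0 for 88876.pem). A minimal sketch of the same two steps follows; it is an illustration, not minikube's code, and writing into /etc/ssl/certs requires root.]

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCert installs a CA certificate the way the log above does: compute its
	// OpenSSL subject hash, then symlink /etc/ssl/certs/<hash>.0 to the cert file.
	func linkCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // ignore the error if the link does not exist yet
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}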
	I1010 19:23:59.052037  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:01.551982  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:59.028519  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:59.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.528623  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.028697  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.528030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.028062  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.527876  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.028745  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.528569  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.914409  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:58.914894  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:58.914927  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:58.914831  149378 retry.go:31] will retry after 1.631827888s: waiting for machine to come up
	I1010 19:24:00.548505  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:00.549135  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:00.549164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:00.549043  149378 retry.go:31] will retry after 2.126895157s: waiting for machine to come up
	I1010 19:24:02.678328  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:02.678907  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:02.678969  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:02.678891  149378 retry.go:31] will retry after 2.754376625s: waiting for machine to come up
	I1010 19:23:58.955104  148525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:58.959833  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:58.966528  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:58.973590  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:58.982390  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:58.990767  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:58.997162  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
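	[Editor's note: the `openssl x509 -noout -in <cert> -checkend 86400` calls above ask whether each control-plane certificate remains valid for at least 24 hours before reusing it. The same check can be done in pure Go, as in the sketch below; the paths are the ones from the log and any of them may be absent on a freshly provisioned node.]

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d,
	// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM data", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			fmt.Println(p, "expires within 24h:", soon, "err:", err)
		}
	}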
	I1010 19:23:59.003647  148525 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:59.003786  148525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:59.003865  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.048772  148525 cri.go:89] found id: ""
	I1010 19:23:59.048869  148525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:59.061267  148525 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:59.061288  148525 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:59.061338  148525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:59.072629  148525 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:59.074287  148525 kubeconfig.go:125] found "default-k8s-diff-port-361847" server: "https://192.168.50.32:8444"
	I1010 19:23:59.077880  148525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:59.090738  148525 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I1010 19:23:59.090783  148525 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:59.090799  148525 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:59.090886  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.136762  148525 cri.go:89] found id: ""
	I1010 19:23:59.136888  148525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:59.155937  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:59.166471  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:59.166493  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:59.166549  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:23:59.178247  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:59.178313  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:59.189455  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:23:59.200127  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:59.200204  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:59.210764  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.221048  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:59.221119  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.231762  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:23:59.242152  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:59.242217  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:59.252608  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:59.265219  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:59.391743  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.243288  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.453782  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.532137  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.623598  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:00.623711  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.124678  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.624626  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.667587  148525 api_server.go:72] duration metric: took 1.043987857s to wait for apiserver process to appear ...
	I1010 19:24:01.667621  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:01.667649  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:01.668298  148525 api_server.go:269] stopped: https://192.168.50.32:8444/healthz: Get "https://192.168.50.32:8444/healthz": dial tcp 192.168.50.32:8444: connect: connection refused
	I1010 19:24:02.168273  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.275654  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.275695  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.275713  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.309713  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.309770  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.668325  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.684992  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:05.685031  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.168198  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.176584  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:06.176627  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.668130  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.682049  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:24:06.692780  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:06.692811  148525 api_server.go:131] duration metric: took 5.025182717s to wait for apiserver health ...
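	[Editor's note: the healthz sequence above is the normal restart progression: the endpoint first refuses connections, then answers 403 for the anonymous probe while RBAC bootstrap roles are still being created, then 500 with the failing post-start hooks listed, and finally 200 "ok". The sketch below illustrates such a polling loop; it is not minikube's api_server.go code, and it skips TLS verification because the probe presents no client certificate and does not trust the apiserver's serving CA.]

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The host running the probe does not trust the apiserver's
			// serving certificate, so verification is skipped here.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.50.32:8444/healthz"
		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz:", string(body)) // "ok"
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}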
	I1010 19:24:06.692820  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:24:06.692831  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:06.694447  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:03.558797  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:06.054012  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:04.028711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:04.528770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.028716  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.528083  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.028204  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.528430  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.027972  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.027829  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.528676  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.435450  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:05.435940  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:05.435970  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:05.435888  149378 retry.go:31] will retry after 2.981990051s: waiting for machine to come up
	I1010 19:24:08.419385  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:08.419982  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:08.420006  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:08.419905  149378 retry.go:31] will retry after 3.976204267s: waiting for machine to come up
	I1010 19:24:06.695841  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:06.711212  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:06.747753  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:06.768344  148525 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:06.768429  148525 system_pods.go:61] "coredns-7c65d6cfc9-rv8vq" [93b209ea-bb5f-40c5-aea8-8771b785f021] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:06.768446  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [65129999-984d-497c-a6e1-9c53a5374991] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:06.768452  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [5f18ba24-29cf-433e-a70d-23757278c04f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:06.768460  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [c189c785-8ac5-4003-802d-9e7c089d450e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:06.768467  148525 system_pods.go:61] "kube-proxy-v5lm8" [e78eabf9-5c65-4cba-83fd-0837cef05126] Running
	I1010 19:24:06.768476  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [4f84f0f5-e255-4534-9db3-e5cfee0b2447] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:06.768485  148525 system_pods.go:61] "metrics-server-6867b74b74-h5kjm" [a3979b79-bd21-490b-97ac-0a78efd43a99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:06.768493  148525 system_pods.go:61] "storage-provisioner" [ca8606d3-9adb-46da-886a-3081b11b52a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:24:06.768499  148525 system_pods.go:74] duration metric: took 20.716461ms to wait for pod list to return data ...
	I1010 19:24:06.768509  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:06.777935  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:06.777973  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:06.777988  148525 node_conditions.go:105] duration metric: took 9.473726ms to run NodePressure ...
	I1010 19:24:06.778019  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:07.053296  148525 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057585  148525 kubeadm.go:739] kubelet initialised
	I1010 19:24:07.057608  148525 kubeadm.go:740] duration metric: took 4.283027ms waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057618  148525 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:07.064157  148525 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.069962  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.069989  148525 pod_ready.go:82] duration metric: took 5.791958ms for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.069999  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.070022  148525 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.075615  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075644  148525 pod_ready.go:82] duration metric: took 5.608749ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.075654  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075661  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.081717  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081743  148525 pod_ready.go:82] duration metric: took 6.074977ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.081754  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081761  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.152204  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152244  148525 pod_ready.go:82] duration metric: took 70.475599ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.152258  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152266  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551283  148525 pod_ready.go:93] pod "kube-proxy-v5lm8" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:07.551311  148525 pod_ready.go:82] duration metric: took 399.036581ms for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551324  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:08.550896  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:10.551437  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:09.028538  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:09.527883  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.028693  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.528439  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.028528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.528679  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.028002  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.527904  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.028685  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.527833  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.401115  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401808  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has current primary IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401841  147213 main.go:141] libmachine: (no-preload-320324) Found IP for machine: 192.168.72.11
	I1010 19:24:12.401856  147213 main.go:141] libmachine: (no-preload-320324) Reserving static IP address...
	I1010 19:24:12.402368  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.402407  147213 main.go:141] libmachine: (no-preload-320324) DBG | skip adding static IP to network mk-no-preload-320324 - found existing host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"}
	I1010 19:24:12.402426  147213 main.go:141] libmachine: (no-preload-320324) Reserved static IP address: 192.168.72.11
	I1010 19:24:12.402443  147213 main.go:141] libmachine: (no-preload-320324) Waiting for SSH to be available...
	I1010 19:24:12.402458  147213 main.go:141] libmachine: (no-preload-320324) DBG | Getting to WaitForSSH function...
	I1010 19:24:12.404803  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405200  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.405226  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405461  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH client type: external
	I1010 19:24:12.405494  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa (-rw-------)
	I1010 19:24:12.405527  147213 main.go:141] libmachine: (no-preload-320324) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:24:12.405541  147213 main.go:141] libmachine: (no-preload-320324) DBG | About to run SSH command:
	I1010 19:24:12.405554  147213 main.go:141] libmachine: (no-preload-320324) DBG | exit 0
	I1010 19:24:12.529010  147213 main.go:141] libmachine: (no-preload-320324) DBG | SSH cmd err, output: <nil>: 
	I1010 19:24:12.529401  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetConfigRaw
	I1010 19:24:12.530257  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.533285  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533692  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.533727  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533963  147213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/config.json ...
	I1010 19:24:12.534205  147213 machine.go:93] provisionDockerMachine start ...
	I1010 19:24:12.534230  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:12.534450  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.536585  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.536976  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.537003  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.537133  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.537323  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537512  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537689  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.537925  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.538138  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.538151  147213 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:24:12.641679  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:24:12.641706  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.641964  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:24:12.642002  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.642235  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.645149  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645488  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.645521  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.645836  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646001  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646155  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.646352  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.646533  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.646545  147213 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320324 && echo "no-preload-320324" | sudo tee /etc/hostname
	I1010 19:24:12.766449  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320324
	
	I1010 19:24:12.766480  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.769836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770331  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.770356  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770584  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.770810  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.770962  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.771119  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.771252  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.771448  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.771470  147213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320324/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:24:12.882458  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:24:12.882495  147213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:24:12.882537  147213 buildroot.go:174] setting up certificates
	I1010 19:24:12.882547  147213 provision.go:84] configureAuth start
	I1010 19:24:12.882562  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.882865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.885854  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886139  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.886173  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886308  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.888479  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.888819  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888976  147213 provision.go:143] copyHostCerts
	I1010 19:24:12.889037  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:24:12.889049  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:24:12.889102  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:24:12.889235  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:24:12.889246  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:24:12.889278  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:24:12.889370  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:24:12.889381  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:24:12.889406  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:24:12.889493  147213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.no-preload-320324 san=[127.0.0.1 192.168.72.11 localhost minikube no-preload-320324]
	I1010 19:24:12.978176  147213 provision.go:177] copyRemoteCerts
	I1010 19:24:12.978235  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:24:12.978261  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.981662  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982182  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.982218  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.982647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.982829  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.983005  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.067269  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:24:13.092777  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 19:24:13.118530  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:24:13.143401  147213 provision.go:87] duration metric: took 260.833877ms to configureAuth
	I1010 19:24:13.143436  147213 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:24:13.143678  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:13.143776  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.147086  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147507  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.147531  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147787  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.148032  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148222  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.148660  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.149013  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.149041  147213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:24:13.375683  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:24:13.375714  147213 machine.go:96] duration metric: took 841.493636ms to provisionDockerMachine
	I1010 19:24:13.375736  147213 start.go:293] postStartSetup for "no-preload-320324" (driver="kvm2")
	I1010 19:24:13.375754  147213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:24:13.375775  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.376085  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:24:13.376116  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.378855  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379179  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.379224  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379408  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.379608  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.379769  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.379910  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.459580  147213 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:24:13.463644  147213 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:24:13.463674  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:24:13.463751  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:24:13.463845  147213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:24:13.463963  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:24:13.473483  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:13.498773  147213 start.go:296] duration metric: took 123.021762ms for postStartSetup
	I1010 19:24:13.498814  147213 fix.go:56] duration metric: took 20.640532088s for fixHost
	I1010 19:24:13.498834  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.501681  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502243  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.502281  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502476  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.502679  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502835  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502993  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.503177  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.503383  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.503396  147213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:24:13.613929  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588253.586950075
	
	I1010 19:24:13.613954  147213 fix.go:216] guest clock: 1728588253.586950075
	I1010 19:24:13.613963  147213 fix.go:229] Guest: 2024-10-10 19:24:13.586950075 +0000 UTC Remote: 2024-10-10 19:24:13.498818059 +0000 UTC m=+359.788559229 (delta=88.132016ms)
	I1010 19:24:13.613988  147213 fix.go:200] guest clock delta is within tolerance: 88.132016ms
	I1010 19:24:13.614020  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 20.755775587s
	I1010 19:24:13.614063  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.614473  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:13.617327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.617694  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.617721  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.618016  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618670  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618884  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618989  147213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:24:13.619047  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.619142  147213 ssh_runner.go:195] Run: cat /version.json
	I1010 19:24:13.619185  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.621972  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622229  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622322  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622348  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622533  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622666  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622697  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622736  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.622865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622930  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623059  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.623073  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.623225  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623349  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.720999  147213 ssh_runner.go:195] Run: systemctl --version
	I1010 19:24:13.727679  147213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:24:09.562834  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:12.058686  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:13.870558  147213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:24:13.877853  147213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:24:13.877923  147213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:24:13.896295  147213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:24:13.896325  147213 start.go:495] detecting cgroup driver to use...
	I1010 19:24:13.896400  147213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:24:13.913122  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:24:13.929359  147213 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:24:13.929437  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:24:13.944840  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:24:13.960062  147213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:24:14.090774  147213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:24:14.246094  147213 docker.go:233] disabling docker service ...
	I1010 19:24:14.246161  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:24:14.264682  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:24:14.280264  147213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:24:14.437156  147213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:24:14.569220  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:24:14.585723  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:24:14.607349  147213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:24:14.607429  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.619113  147213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:24:14.619198  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.631818  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.643977  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.655753  147213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:24:14.667235  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.679225  147213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.698760  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.710440  147213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:24:14.722565  147213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:24:14.722625  147213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:24:14.740587  147213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:24:14.752630  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:14.887728  147213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:24:14.989026  147213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:24:14.989109  147213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:24:14.995309  147213 start.go:563] Will wait 60s for crictl version
	I1010 19:24:14.995366  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.999840  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:24:15.043758  147213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:24:15.043856  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.079274  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.116630  147213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:24:13.050633  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:15.552413  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:14.028090  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:14.528072  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.028648  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.528388  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.028060  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.528472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.028098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.528125  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.028563  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.528613  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.118343  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:15.121596  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122101  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:15.122133  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122396  147213 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1010 19:24:15.127140  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:15.141249  147213 kubeadm.go:883] updating cluster {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:24:15.141375  147213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:24:15.141417  147213 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:24:15.183271  147213 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:24:15.183303  147213 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:24:15.183412  147213 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.183444  147213 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.183452  147213 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.183459  147213 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1010 19:24:15.183422  147213 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.183493  147213 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.183512  147213 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.183507  147213 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.185099  147213 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.185098  147213 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.185103  147213 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.185106  147213 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.328484  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.333573  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.340047  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.358922  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1010 19:24:15.359800  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.366668  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.409942  147213 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1010 19:24:15.409995  147213 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.410050  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.416186  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.452343  147213 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1010 19:24:15.452385  147213 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.452426  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.533567  147213 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1010 19:24:15.533620  147213 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.533671  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585611  147213 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1010 19:24:15.585659  147213 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.585685  147213 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1010 19:24:15.585712  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585724  147213 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.585765  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585769  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.585805  147213 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1010 19:24:15.585832  147213 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.585856  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.585872  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585943  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.603131  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.661918  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.683739  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.683760  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.683833  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.683880  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.685385  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.792253  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.818116  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.818183  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.818289  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.818321  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.818402  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.878069  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1010 19:24:15.878202  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.940520  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.953841  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1010 19:24:15.953955  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:15.953990  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.954047  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1010 19:24:15.954115  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1010 19:24:15.954120  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1010 19:24:15.954130  147213 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954144  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:15.954157  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954205  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:16.005975  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1010 19:24:16.006028  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1010 19:24:16.006090  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:16.023905  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1010 19:24:16.023990  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1010 19:24:16.024024  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:16.024023  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1010 19:24:16.033715  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.150881  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.144766677s)
	I1010 19:24:18.150935  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1010 19:24:18.150931  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.196753845s)
	I1010 19:24:18.150944  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.126894115s)
	I1010 19:24:18.150973  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1010 19:24:18.150953  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1010 19:24:18.150982  147213 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.117235962s)
	I1010 19:24:18.151002  147213 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151014  147213 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1010 19:24:18.151053  147213 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.151069  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151097  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.059223  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:14.059252  148525 pod_ready.go:82] duration metric: took 6.507918149s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:14.059266  148525 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:16.066908  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.082398  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.051799  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:20.552644  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:19.028306  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:19.527854  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.028765  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.528245  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.027936  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.528172  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.028527  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.528446  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.028709  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.528711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.952099  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.801005716s)
	I1010 19:24:21.952134  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1010 19:24:21.952163  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952165  147213 ssh_runner.go:235] Completed: which crictl: (3.801048272s)
	I1010 19:24:21.952212  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952225  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:21.993627  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:20.566055  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:22.567145  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:23.053514  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:25.554151  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:24.028402  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:24.528516  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.028207  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.527928  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.028032  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.527826  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.028609  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.528448  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.028018  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.528597  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.929370  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.977128659s)
	I1010 19:24:23.929418  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1010 19:24:23.929450  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929498  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.935844384s)
	I1010 19:24:23.929532  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929551  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:26.009485  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079908324s)
	I1010 19:24:26.009567  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 19:24:26.009484  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079925224s)
	I1010 19:24:26.009641  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1010 19:24:26.009671  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:26.009684  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:26.009720  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:27.968483  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.958772952s)
	I1010 19:24:27.968534  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1010 19:24:27.968559  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.958813643s)
	I1010 19:24:27.968587  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1010 19:24:27.968619  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:27.968686  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:25.069787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:27.567013  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:28.050968  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:30.551528  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:29.027960  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.528054  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.028227  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.527791  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.027790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.528761  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.028343  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.528553  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.028195  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.527895  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.315157  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.346440456s)
	I1010 19:24:29.315211  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1010 19:24:29.315244  147213 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:29.315296  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:30.173931  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 19:24:30.173977  147213 cache_images.go:123] Successfully loaded all cached images
	I1010 19:24:30.173985  147213 cache_images.go:92] duration metric: took 14.990666845s to LoadCachedImages
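	The block above is minikube's cached-image restore: for each image archive it runs stat -c "%s %y" on the VM, skips the copy when an identical file already exists, loads the archive with "sudo podman load -i", and removes a stale tag with crictl when the hash in the runtime no longer matches. A minimal Go sketch of the stat-then-load decision follows; fakeRunner and loadCachedImage are illustrative stand-ins, not minikube's actual ssh_runner API.

package main

import (
	"fmt"
	"strings"
)

// fakeRunner is an illustrative stand-in for minikube's SSH command runner
// (ssh_runner.go); it only records the commands it would execute.
type fakeRunner struct{ remoteStat string }

func (f fakeRunner) Run(cmd string) (string, error) {
	fmt.Println("run:", cmd)
	if strings.HasPrefix(cmd, "stat ") {
		return f.remoteStat, nil
	}
	return "", nil
}

// loadCachedImage mirrors the flow in the log: stat the archive on the VM,
// skip the copy when size and mtime already match, then load it with podman.
func loadCachedImage(r fakeRunner, localStat, remotePath string) error {
	out, err := r.Run(fmt.Sprintf(`stat -c "%%s %%y" %s`, remotePath))
	if err == nil && strings.TrimSpace(out) == strings.TrimSpace(localStat) {
		fmt.Printf("copy: skipping %s (exists)\n", remotePath)
	} else {
		fmt.Printf("copy: would transfer %s\n", remotePath) // scp step elided
	}
	_, err = r.Run("sudo podman load -i " + remotePath)
	return err
}

func main() {
	r := fakeRunner{remoteStat: "12345 2024-10-10"}
	_ = loadCachedImage(r, "12345 2024-10-10", "/var/lib/minikube/images/coredns_v1.11.3")
}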
	I1010 19:24:30.174001  147213 kubeadm.go:934] updating node { 192.168.72.11 8443 v1.31.1 crio true true} ...
	I1010 19:24:30.174129  147213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
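	The kubelet [Unit]/[Service] snippet above is the systemd drop-in minikube generates for this node (written further down as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 316 bytes). A hedged sketch of rendering such a drop-in with Go's text/template is below; the template fields and values are taken from the log, and the real file may carry additional flags.

package main

import (
	"os"
	"text/template"
)

// tmpl is an illustrative template for the kubelet systemd drop-in shown in
// the log; it is not a copy of minikube's embedded template.
const tmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(tmpl))
	// Values taken from the log above; rendering to stdout instead of the
	// real drop-in path for illustration.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "no-preload-320324",
		"NodeIP":            "192.168.72.11",
	})
}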
	I1010 19:24:30.174221  147213 ssh_runner.go:195] Run: crio config
	I1010 19:24:30.222677  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:30.222702  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:30.222711  147213 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:24:30.222736  147213 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320324 NodeName:no-preload-320324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:24:30.222923  147213 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320324"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
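	The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As an illustration only (not minikube code), a small Go program using gopkg.in/yaml.v3 can decode each document and report its kind:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// main decodes every document in a kubeadm config file and prints its kind,
// which for the file above should yield InitConfiguration, ClusterConfiguration,
// KubeletConfiguration and KubeProxyConfiguration.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}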
	
	I1010 19:24:30.222998  147213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:24:30.233755  147213 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:24:30.233818  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:24:30.243829  147213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1010 19:24:30.263056  147213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:24:30.282362  147213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1010 19:24:30.300449  147213 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I1010 19:24:30.304661  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:30.317462  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:30.445515  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:30.462816  147213 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324 for IP: 192.168.72.11
	I1010 19:24:30.462847  147213 certs.go:194] generating shared ca certs ...
	I1010 19:24:30.462871  147213 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:30.463074  147213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:24:30.463132  147213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:24:30.463145  147213 certs.go:256] generating profile certs ...
	I1010 19:24:30.463289  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/client.key
	I1010 19:24:30.463364  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key.a7785fc5
	I1010 19:24:30.463413  147213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key
	I1010 19:24:30.463565  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:24:30.463604  147213 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:24:30.463617  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:24:30.463657  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:24:30.463689  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:24:30.463721  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:24:30.463774  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:30.464502  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:24:30.525320  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:24:30.565229  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:24:30.597731  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:24:30.626174  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 19:24:30.659991  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:24:30.685662  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:24:30.710757  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:24:30.736325  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:24:30.771239  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:24:30.796467  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:24:30.821925  147213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:24:30.840743  147213 ssh_runner.go:195] Run: openssl version
	I1010 19:24:30.846898  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:24:30.858410  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863188  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863260  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.869307  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:24:30.880319  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:24:30.891307  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895771  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895828  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.901510  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:24:30.912627  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:24:30.924330  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929108  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929194  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.935266  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:24:30.946714  147213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:24:30.951692  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:24:30.957910  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:24:30.964296  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:24:30.971001  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:24:30.977427  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:24:30.984201  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
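	The six openssl runs above use -checkend 86400 to verify that each existing control-plane certificate remains valid for at least 24 hours before it is reused. The Go sketch below performs the equivalent check with crypto/x509; the certificate path in main is simply the first one from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within the given
// window, mirroring `openssl x509 -checkend 86400` (86400s = 24h) from the log.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}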
	I1010 19:24:30.990532  147213 kubeadm.go:392] StartCluster: {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:24:30.990622  147213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:24:30.990727  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.033544  147213 cri.go:89] found id: ""
	I1010 19:24:31.033624  147213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:24:31.044956  147213 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:24:31.044975  147213 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:24:31.045025  147213 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:24:31.056563  147213 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:24:31.057705  147213 kubeconfig.go:125] found "no-preload-320324" server: "https://192.168.72.11:8443"
	I1010 19:24:31.059853  147213 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:24:31.071304  147213 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.11
	I1010 19:24:31.071338  147213 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:24:31.071353  147213 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:24:31.071444  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.107345  147213 cri.go:89] found id: ""
	I1010 19:24:31.107429  147213 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:24:31.125556  147213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:24:31.135390  147213 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:24:31.135428  147213 kubeadm.go:157] found existing configuration files:
	
	I1010 19:24:31.135478  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:24:31.144653  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:24:31.144715  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:24:31.154458  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:24:31.163444  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:24:31.163501  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:24:31.172633  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.181939  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:24:31.182001  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.191638  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:24:31.200846  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:24:31.200935  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
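	The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so the kubeadm init phases below can regenerate it. A hedged Go sketch of that loop (not minikube's implementation) follows.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep/rm sequence in the log.
// A missing file simply fails the check and is removed with os.Remove, whose
// error is ignored, like `rm -f`.
func cleanStaleConfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // endpoint present, keep the file
		}
		_ = os.Remove(f)
		fmt.Println("removed stale config:", f)
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443")
}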
	I1010 19:24:31.211048  147213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:24:31.221008  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:31.352733  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.270546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.474510  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.551517  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.707737  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:32.707826  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.208647  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.708539  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.728647  147213 api_server.go:72] duration metric: took 1.020907246s to wait for apiserver process to appear ...
	I1010 19:24:33.728678  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:33.728701  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:30.066635  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.066732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.552277  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:35.051399  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:34.028434  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:34.527949  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.028224  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.528470  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.028540  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.528499  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.028362  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.528483  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.028531  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.527865  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.025756  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.025787  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.025802  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.078247  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.078283  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.229601  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.237166  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.237204  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:37.728824  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.735660  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.735700  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.229746  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.234449  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:38.234491  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.729000  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.737564  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:24:38.751982  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:38.752012  147213 api_server.go:131] duration metric: took 5.023326632s to wait for apiserver health ...
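	The preceding healthz exchange is the usual restart pattern: the anonymous probe first receives 403 Forbidden, then 500 with the poststarthook checklist while rbac/bootstrap-roles and the priority-class hooks finish, and finally 200 ok. A self-contained Go sketch of such a poll loop is below; the URL, timeout and retry interval are illustrative, and certificate verification is skipped because the apiserver's serving certificate is not trusted by the probing host.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 "ok",
// retrying on 403 and 500 responses like the ones captured in the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The serving cert is self-signed from the probe's point of view,
		// so verification is skipped for this anonymous health check.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.72.11:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}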
	I1010 19:24:38.752023  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:38.752030  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:38.753351  147213 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:34.067208  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:36.067413  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.566729  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.754645  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:38.772086  147213 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
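	Configuring the bridge CNI amounts to writing a conflist into /etc/cni/net.d (496 bytes in the scp line above). The JSON embedded in the Go sketch below is a generic bridge-plus-portmap conflist for the 10.244.0.0/16 pod CIDR used here; it is an assumption about the file's shape, not a copy of the bytes minikube actually wrote.

package main

import "os"

// conflist is an illustrative bridge CNI configuration of the kind written to
// /etc/cni/net.d/1-k8s.conflist; minikube's real file may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Writing under /etc requires root, as the sudo mkdir in the log implies.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}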
	I1010 19:24:38.792017  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:38.800547  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:38.800592  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:38.800602  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:38.800609  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:38.800617  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:38.800624  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:24:38.800629  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:38.800638  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:38.800642  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:24:38.800648  147213 system_pods.go:74] duration metric: took 8.60732ms to wait for pod list to return data ...
	I1010 19:24:38.800654  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:38.804628  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:38.804663  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:38.804680  147213 node_conditions.go:105] duration metric: took 4.021699ms to run NodePressure ...
	I1010 19:24:38.804700  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:39.078452  147213 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087090  147213 kubeadm.go:739] kubelet initialised
	I1010 19:24:39.087116  147213 kubeadm.go:740] duration metric: took 8.636436ms waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087125  147213 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:39.094468  147213 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.108724  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108756  147213 pod_ready.go:82] duration metric: took 14.254631ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.108770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108780  147213 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.119304  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119335  147213 pod_ready.go:82] duration metric: took 10.543376ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.119345  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119352  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.127243  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127268  147213 pod_ready.go:82] duration metric: took 7.907414ms for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.127278  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127285  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.195549  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195578  147213 pod_ready.go:82] duration metric: took 68.282333ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.195588  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195594  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.595842  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595871  147213 pod_ready.go:82] duration metric: took 400.267905ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.595880  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595886  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.995731  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995760  147213 pod_ready.go:82] duration metric: took 399.866947ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.995770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995777  147213 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:40.396420  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396456  147213 pod_ready.go:82] duration metric: took 400.667834ms for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:40.396470  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396482  147213 pod_ready.go:39] duration metric: took 1.309346973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:40.396508  147213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:24:40.409956  147213 ops.go:34] apiserver oom_adj: -16
	I1010 19:24:40.409980  147213 kubeadm.go:597] duration metric: took 9.364998977s to restartPrimaryControlPlane
	I1010 19:24:40.409991  147213 kubeadm.go:394] duration metric: took 9.419470024s to StartCluster
	I1010 19:24:40.410009  147213 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.410085  147213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:24:40.413037  147213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.413448  147213 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:24:40.413783  147213 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:24:40.413979  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:40.413996  147213 addons.go:69] Setting default-storageclass=true in profile "no-preload-320324"
	I1010 19:24:40.414020  147213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320324"
	I1010 19:24:40.413983  147213 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320324"
	I1010 19:24:40.414048  147213 addons.go:234] Setting addon storage-provisioner=true in "no-preload-320324"
	W1010 19:24:40.414057  147213 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:24:40.414091  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414170  147213 addons.go:69] Setting metrics-server=true in profile "no-preload-320324"
	I1010 19:24:40.414230  147213 addons.go:234] Setting addon metrics-server=true in "no-preload-320324"
	W1010 19:24:40.414252  147213 addons.go:243] addon metrics-server should already be in state true
	I1010 19:24:40.414292  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414612  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414640  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414678  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.414712  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.415409  147213 out.go:177] * Verifying Kubernetes components...
	I1010 19:24:40.415412  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.415553  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.416812  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:40.431363  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1010 19:24:40.431474  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I1010 19:24:40.431659  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I1010 19:24:40.431983  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432136  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432156  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432567  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432587  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432710  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432732  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432740  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432749  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.433000  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433079  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433103  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433468  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.433498  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.436984  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.453362  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.453426  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.454884  147213 addons.go:234] Setting addon default-storageclass=true in "no-preload-320324"
	W1010 19:24:40.454913  147213 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:24:40.454947  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.455335  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.455394  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.470642  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1010 19:24:40.471118  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.471701  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.471730  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.472241  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.472523  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.473953  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I1010 19:24:40.474196  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I1010 19:24:40.474332  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474672  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474814  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.474827  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475181  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.475210  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475310  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475702  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475785  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.475825  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.475922  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.476046  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.478147  147213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:40.478395  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.479869  147213 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.479896  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:24:40.479922  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.480549  147213 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:24:37.051611  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:39.551952  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:41.553895  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:40.482101  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:24:40.482119  147213 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:24:40.482144  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.484066  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484560  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.484588  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484833  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.485065  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.485241  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.485272  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485443  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.485788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.485807  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485842  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.486017  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.486202  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.486454  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.492533  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1010 19:24:40.493012  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.493566  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.493595  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.494056  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.494325  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.496053  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.496301  147213 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.496321  147213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:24:40.496344  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.499125  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499667  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.499690  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499843  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.500022  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.500194  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.500357  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.651454  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:40.667056  147213 node_ready.go:35] waiting up to 6m0s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:40.782217  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.803094  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:24:40.803122  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:24:40.812288  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.837679  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:24:40.837723  147213 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:24:40.882090  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:40.882119  147213 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:24:40.940115  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:41.949181  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.136852217s)
	I1010 19:24:41.949258  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949275  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949286  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.167030419s)
	I1010 19:24:41.949327  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949345  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949625  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949652  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949660  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949661  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949668  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949679  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949761  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949804  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949819  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949826  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.950811  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950824  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.950827  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950822  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950845  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950811  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.957797  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.957814  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.958071  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.958077  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.958099  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005530  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065377363s)
	I1010 19:24:42.005590  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.005602  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.005914  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.005937  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005935  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.005972  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.006003  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.006280  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.006313  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.006335  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.006354  147213 addons.go:475] Verifying addon metrics-server=true in "no-preload-320324"
	I1010 19:24:42.008523  147213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1010 19:24:39.028363  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:39.528571  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.027992  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.528552  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.028220  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.527889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:42.028462  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:42.028549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:42.070669  148123 cri.go:89] found id: ""
	I1010 19:24:42.070701  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.070710  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:42.070716  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:42.070775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:42.110693  148123 cri.go:89] found id: ""
	I1010 19:24:42.110731  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.110748  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:42.110756  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:42.110816  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:42.148471  148123 cri.go:89] found id: ""
	I1010 19:24:42.148511  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.148525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:42.148535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:42.148603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:42.184637  148123 cri.go:89] found id: ""
	I1010 19:24:42.184670  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.184683  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:42.184691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:42.184759  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:42.226794  148123 cri.go:89] found id: ""
	I1010 19:24:42.226834  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.226848  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:42.226857  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:42.226942  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:42.262969  148123 cri.go:89] found id: ""
	I1010 19:24:42.263004  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.263017  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:42.263027  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:42.263167  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:42.301063  148123 cri.go:89] found id: ""
	I1010 19:24:42.301088  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.301096  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:42.301102  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:42.301153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:42.338829  148123 cri.go:89] found id: ""
	I1010 19:24:42.338862  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.338873  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:42.338886  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:42.338901  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:42.391426  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:42.391478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:42.405520  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:42.405563  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:42.544245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:42.544275  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:42.544292  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:42.620760  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:42.620804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:42.009965  147213 addons.go:510] duration metric: took 1.596190602s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1010 19:24:42.672792  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:41.066744  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.066850  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.557231  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:46.051820  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:45.164389  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:45.179714  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:45.179776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:45.216271  148123 cri.go:89] found id: ""
	I1010 19:24:45.216308  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.216319  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:45.216327  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:45.216394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:45.255109  148123 cri.go:89] found id: ""
	I1010 19:24:45.255154  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.255167  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:45.255176  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:45.255248  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:45.294338  148123 cri.go:89] found id: ""
	I1010 19:24:45.294369  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.294380  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:45.294388  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:45.294457  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:45.339573  148123 cri.go:89] found id: ""
	I1010 19:24:45.339606  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.339626  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:45.339636  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:45.339703  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:45.400184  148123 cri.go:89] found id: ""
	I1010 19:24:45.400214  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.400225  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:45.400234  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:45.400301  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:45.436152  148123 cri.go:89] found id: ""
	I1010 19:24:45.436183  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.436195  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:45.436203  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:45.436262  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.484321  148123 cri.go:89] found id: ""
	I1010 19:24:45.484347  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.484355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:45.484361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:45.484441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:45.532893  148123 cri.go:89] found id: ""
	I1010 19:24:45.532923  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.532932  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:45.532943  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:45.532958  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:45.585183  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:45.585214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:45.638800  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:45.638847  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:45.653928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:45.653961  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:45.733534  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:45.733564  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:45.733580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.314367  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:48.329687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:48.329754  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:48.370372  148123 cri.go:89] found id: ""
	I1010 19:24:48.370400  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.370409  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:48.370415  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:48.370490  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:48.409362  148123 cri.go:89] found id: ""
	I1010 19:24:48.409396  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.409407  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:48.409415  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:48.409500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:48.449640  148123 cri.go:89] found id: ""
	I1010 19:24:48.449672  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.449681  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:48.449687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:48.449746  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:48.485053  148123 cri.go:89] found id: ""
	I1010 19:24:48.485104  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.485121  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:48.485129  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:48.485183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:48.520083  148123 cri.go:89] found id: ""
	I1010 19:24:48.520113  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.520121  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:48.520127  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:48.520185  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:48.555112  148123 cri.go:89] found id: ""
	I1010 19:24:48.555138  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.555149  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:48.555156  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:48.555241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.171882  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:47.673073  147213 node_ready.go:49] node "no-preload-320324" has status "Ready":"True"
	I1010 19:24:47.673103  147213 node_ready.go:38] duration metric: took 7.00601327s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:47.673117  147213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:47.682195  147213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690079  147213 pod_ready.go:93] pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.690111  147213 pod_ready.go:82] duration metric: took 7.882823ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690126  147213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698009  147213 pod_ready.go:93] pod "etcd-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.698038  147213 pod_ready.go:82] duration metric: took 7.903016ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698052  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:45.066893  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:47.566144  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.551853  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.050365  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.592682  148123 cri.go:89] found id: ""
	I1010 19:24:48.592710  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.592719  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:48.592725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:48.592775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:48.628449  148123 cri.go:89] found id: ""
	I1010 19:24:48.628482  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.628490  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:48.628500  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:48.628513  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.709385  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:48.709428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:48.752542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:48.752677  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:48.812331  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:48.812385  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:48.827057  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:48.827095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:48.908312  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.409122  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:51.423404  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:51.423482  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:51.465742  148123 cri.go:89] found id: ""
	I1010 19:24:51.465781  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.465793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:51.465803  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:51.465875  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:51.501597  148123 cri.go:89] found id: ""
	I1010 19:24:51.501630  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.501641  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:51.501648  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:51.501711  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:51.537500  148123 cri.go:89] found id: ""
	I1010 19:24:51.537539  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.537551  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:51.537559  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:51.537626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:51.579546  148123 cri.go:89] found id: ""
	I1010 19:24:51.579576  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.579587  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:51.579595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:51.579660  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:51.619893  148123 cri.go:89] found id: ""
	I1010 19:24:51.619917  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.619925  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:51.619931  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:51.619999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:51.655787  148123 cri.go:89] found id: ""
	I1010 19:24:51.655824  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.655837  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:51.655846  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:51.655921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:51.699582  148123 cri.go:89] found id: ""
	I1010 19:24:51.699611  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.699619  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:51.699625  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:51.699685  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:51.738658  148123 cri.go:89] found id: ""
	I1010 19:24:51.738689  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.738697  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:51.738707  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:51.738721  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:51.789556  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:51.789587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:51.858919  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:51.858968  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:51.887818  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:51.887854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:51.964408  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.964434  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:51.964449  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:49.705130  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.705847  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.205374  147213 pod_ready.go:93] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.205401  147213 pod_ready.go:82] duration metric: took 5.507341974s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.205413  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210237  147213 pod_ready.go:93] pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.210259  147213 pod_ready.go:82] duration metric: took 4.83925ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210269  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215158  147213 pod_ready.go:93] pod "kube-proxy-vn6sv" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.215186  147213 pod_ready.go:82] duration metric: took 4.909888ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215198  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220077  147213 pod_ready.go:93] pod "kube-scheduler-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.220097  147213 pod_ready.go:82] duration metric: took 4.890652ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220105  147213 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:50.066165  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:52.066343  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.552604  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:56.050748  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.545627  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:54.560748  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:54.560830  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:54.595863  148123 cri.go:89] found id: ""
	I1010 19:24:54.595893  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.595903  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:54.595912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:54.595978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:54.633645  148123 cri.go:89] found id: ""
	I1010 19:24:54.633681  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.633693  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:54.633701  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:54.633761  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:54.668269  148123 cri.go:89] found id: ""
	I1010 19:24:54.668299  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.668311  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:54.668317  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:54.668369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:54.706559  148123 cri.go:89] found id: ""
	I1010 19:24:54.706591  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.706600  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:54.706608  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:54.706673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:54.746248  148123 cri.go:89] found id: ""
	I1010 19:24:54.746283  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.746295  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:54.746303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:54.746383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:54.782982  148123 cri.go:89] found id: ""
	I1010 19:24:54.783017  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.783027  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:54.783033  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:54.783085  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:54.819664  148123 cri.go:89] found id: ""
	I1010 19:24:54.819700  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.819713  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:54.819722  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:54.819797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:54.859603  148123 cri.go:89] found id: ""
	I1010 19:24:54.859632  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.859640  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:54.859650  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:54.859662  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:54.910949  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:54.910987  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:54.925941  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:54.925975  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:55.009626  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:55.009654  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:55.009669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.097196  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:55.097237  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:57.641732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:57.661141  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:57.661222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:57.716054  148123 cri.go:89] found id: ""
	I1010 19:24:57.716086  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.716094  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:57.716100  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:57.716178  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:57.766862  148123 cri.go:89] found id: ""
	I1010 19:24:57.766892  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.766906  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:57.766917  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:57.766989  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:57.814776  148123 cri.go:89] found id: ""
	I1010 19:24:57.814808  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.814821  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:57.814829  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:57.814899  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:57.850458  148123 cri.go:89] found id: ""
	I1010 19:24:57.850495  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.850505  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:57.850516  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:57.850667  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:57.886541  148123 cri.go:89] found id: ""
	I1010 19:24:57.886566  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.886575  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:57.886581  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:57.886645  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:57.920748  148123 cri.go:89] found id: ""
	I1010 19:24:57.920783  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.920795  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:57.920802  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:57.920887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:57.957801  148123 cri.go:89] found id: ""
	I1010 19:24:57.957833  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.957844  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:57.957852  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:57.957919  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:57.995648  148123 cri.go:89] found id: ""
	I1010 19:24:57.995683  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.995694  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:57.995706  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:57.995723  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:58.034987  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:58.035030  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:58.089014  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:58.089066  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:58.104179  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:58.104211  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:58.178239  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:58.178271  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:58.178291  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.229459  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.727298  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.566779  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.065902  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:58.051248  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.550512  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.756057  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:00.770086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:00.770150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:00.811400  148123 cri.go:89] found id: ""
	I1010 19:25:00.811444  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.811456  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:00.811466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:00.811533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:00.848821  148123 cri.go:89] found id: ""
	I1010 19:25:00.848859  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.848870  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:00.848877  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:00.848947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:00.883529  148123 cri.go:89] found id: ""
	I1010 19:25:00.883557  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.883566  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:00.883573  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:00.883631  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:00.922952  148123 cri.go:89] found id: ""
	I1010 19:25:00.922982  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.922992  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:00.922998  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:00.923057  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:00.956116  148123 cri.go:89] found id: ""
	I1010 19:25:00.956147  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.956159  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:00.956168  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:00.956233  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:00.998892  148123 cri.go:89] found id: ""
	I1010 19:25:00.998919  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.998930  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:00.998939  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:00.998996  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:01.037617  148123 cri.go:89] found id: ""
	I1010 19:25:01.037649  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.037657  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:01.037663  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:01.037717  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:01.078003  148123 cri.go:89] found id: ""
	I1010 19:25:01.078034  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.078046  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:01.078055  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:01.078069  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:01.118948  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:01.118977  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:01.171053  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:01.171091  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:01.187027  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:01.187060  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:01.261288  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:01.261315  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:01.261330  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:59.728997  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.227142  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:59.566448  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.066184  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.551951  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:05.050558  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:03.848144  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:03.862527  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:03.862601  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:03.899120  148123 cri.go:89] found id: ""
	I1010 19:25:03.899159  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.899180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:03.899187  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:03.899276  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:03.935461  148123 cri.go:89] found id: ""
	I1010 19:25:03.935492  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.935500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:03.935507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:03.935569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:03.971892  148123 cri.go:89] found id: ""
	I1010 19:25:03.971925  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.971937  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:03.971945  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:03.972019  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:04.008341  148123 cri.go:89] found id: ""
	I1010 19:25:04.008368  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.008377  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:04.008390  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:04.008447  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:04.042007  148123 cri.go:89] found id: ""
	I1010 19:25:04.042036  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.042044  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:04.042051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:04.042101  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:04.078017  148123 cri.go:89] found id: ""
	I1010 19:25:04.078045  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.078053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:04.078059  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:04.078112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:04.116792  148123 cri.go:89] found id: ""
	I1010 19:25:04.116823  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.116832  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:04.116839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:04.116928  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:04.153468  148123 cri.go:89] found id: ""
	I1010 19:25:04.153496  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.153503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:04.153513  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:04.153525  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:04.230646  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:04.230683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:04.270975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:04.271015  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.320845  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:04.320902  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:04.337789  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:04.337828  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:04.416077  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:06.916412  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:06.931309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:06.931376  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:06.969224  148123 cri.go:89] found id: ""
	I1010 19:25:06.969257  148123 logs.go:282] 0 containers: []
	W1010 19:25:06.969269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:06.969277  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:06.969346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:07.006598  148123 cri.go:89] found id: ""
	I1010 19:25:07.006653  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.006667  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:07.006676  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:07.006744  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:07.046392  148123 cri.go:89] found id: ""
	I1010 19:25:07.046418  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.046427  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:07.046433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:07.046491  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:07.089570  148123 cri.go:89] found id: ""
	I1010 19:25:07.089603  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.089615  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:07.089624  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:07.089689  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:07.127152  148123 cri.go:89] found id: ""
	I1010 19:25:07.127185  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.127195  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:07.127204  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:07.127282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:07.160516  148123 cri.go:89] found id: ""
	I1010 19:25:07.160543  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.160554  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:07.160563  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:07.160635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:07.197270  148123 cri.go:89] found id: ""
	I1010 19:25:07.197307  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.197318  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:07.197327  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:07.197395  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:07.232658  148123 cri.go:89] found id: ""
	I1010 19:25:07.232687  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.232696  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:07.232706  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:07.232720  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:07.246579  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:07.246609  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:07.315200  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:07.315230  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:07.315251  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:07.389532  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:07.389580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:07.438808  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:07.438837  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.227537  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.727865  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:04.067121  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.565089  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:08.565565  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:07.051371  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.051420  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.054211  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.990294  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:10.004155  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:10.004270  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:10.039034  148123 cri.go:89] found id: ""
	I1010 19:25:10.039068  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.039078  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:10.039087  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:10.039174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:10.079936  148123 cri.go:89] found id: ""
	I1010 19:25:10.079967  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.079978  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:10.079985  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:10.080068  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:10.116441  148123 cri.go:89] found id: ""
	I1010 19:25:10.116471  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.116483  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:10.116491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:10.116556  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:10.151998  148123 cri.go:89] found id: ""
	I1010 19:25:10.152045  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.152058  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:10.152075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:10.152153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:10.188514  148123 cri.go:89] found id: ""
	I1010 19:25:10.188547  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.188565  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:10.188574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:10.188640  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:10.223648  148123 cri.go:89] found id: ""
	I1010 19:25:10.223682  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.223694  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:10.223702  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:10.223771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:10.264117  148123 cri.go:89] found id: ""
	I1010 19:25:10.264143  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.264151  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:10.264158  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:10.264215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:10.302566  148123 cri.go:89] found id: ""
	I1010 19:25:10.302601  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.302613  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:10.302625  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:10.302649  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:10.316567  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:10.316606  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:10.388642  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:10.388674  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:10.388692  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:10.471035  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:10.471090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:10.519600  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:10.519645  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:13.075428  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:13.090830  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:13.090915  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:13.126302  148123 cri.go:89] found id: ""
	I1010 19:25:13.126336  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.126348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:13.126357  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:13.126429  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:13.163309  148123 cri.go:89] found id: ""
	I1010 19:25:13.163344  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.163357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:13.163365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:13.163428  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:13.197569  148123 cri.go:89] found id: ""
	I1010 19:25:13.197605  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.197615  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:13.197621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:13.197692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:13.236299  148123 cri.go:89] found id: ""
	I1010 19:25:13.236328  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.236336  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:13.236342  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:13.236406  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:13.275667  148123 cri.go:89] found id: ""
	I1010 19:25:13.275696  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.275705  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:13.275711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:13.275776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:13.309728  148123 cri.go:89] found id: ""
	I1010 19:25:13.309763  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.309774  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:13.309786  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:13.309854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:13.347464  148123 cri.go:89] found id: ""
	I1010 19:25:13.347493  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.347504  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:13.347513  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:13.347576  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:13.385086  148123 cri.go:89] found id: ""
	I1010 19:25:13.385119  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.385130  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:13.385142  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:13.385156  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:13.466513  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:13.466553  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:13.508610  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:13.508651  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:09.226850  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.227241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.726879  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:10.565663  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:12.565845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.555465  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:16.051764  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.564897  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:13.564932  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:13.578771  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:13.578803  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:13.651857  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.153066  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:16.167613  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:16.167698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:16.203867  148123 cri.go:89] found id: ""
	I1010 19:25:16.203897  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.203906  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:16.203912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:16.203965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:16.242077  148123 cri.go:89] found id: ""
	I1010 19:25:16.242109  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.242130  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:16.242138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:16.242234  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:16.283792  148123 cri.go:89] found id: ""
	I1010 19:25:16.283825  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.283836  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:16.283844  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:16.283907  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:16.319934  148123 cri.go:89] found id: ""
	I1010 19:25:16.319969  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.319980  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:16.319990  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:16.320063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:16.360439  148123 cri.go:89] found id: ""
	I1010 19:25:16.360482  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.360495  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:16.360504  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:16.360569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:16.395890  148123 cri.go:89] found id: ""
	I1010 19:25:16.395922  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.395931  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:16.395941  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:16.396009  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:16.431396  148123 cri.go:89] found id: ""
	I1010 19:25:16.431442  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.431456  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:16.431463  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:16.431531  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:16.471768  148123 cri.go:89] found id: ""
	I1010 19:25:16.471796  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.471804  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:16.471814  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:16.471830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:16.526112  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:16.526155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:16.541041  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:16.541074  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:16.623245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.623273  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:16.623289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:16.710840  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:16.710884  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:15.727171  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.728705  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:15.067362  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.566242  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:18.551207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:21.050222  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:19.257566  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:19.273982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:19.274063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:19.311360  148123 cri.go:89] found id: ""
	I1010 19:25:19.311392  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.311404  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:19.311411  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:19.311475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:19.347025  148123 cri.go:89] found id: ""
	I1010 19:25:19.347053  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.347062  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:19.347068  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:19.347120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:19.383575  148123 cri.go:89] found id: ""
	I1010 19:25:19.383608  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.383620  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:19.383633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:19.383698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:19.419245  148123 cri.go:89] found id: ""
	I1010 19:25:19.419270  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.419278  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:19.419284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:19.419334  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:19.453563  148123 cri.go:89] found id: ""
	I1010 19:25:19.453591  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.453601  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:19.453658  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:19.453715  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:19.496430  148123 cri.go:89] found id: ""
	I1010 19:25:19.496458  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.496466  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:19.496473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:19.496533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:19.534626  148123 cri.go:89] found id: ""
	I1010 19:25:19.534670  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.534682  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:19.534691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:19.534757  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:19.574186  148123 cri.go:89] found id: ""
	I1010 19:25:19.574236  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.574248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:19.574261  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:19.574283  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:19.587952  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:19.587989  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:19.664607  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:19.664644  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:19.664664  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:19.742185  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:19.742224  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:19.781530  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:19.781562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.335398  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:22.349627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:22.349693  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:22.389075  148123 cri.go:89] found id: ""
	I1010 19:25:22.389102  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.389110  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:22.389117  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:22.389168  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:22.424331  148123 cri.go:89] found id: ""
	I1010 19:25:22.424368  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.424381  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:22.424389  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:22.424460  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:22.464011  148123 cri.go:89] found id: ""
	I1010 19:25:22.464051  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.464062  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:22.464070  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:22.464141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:22.500786  148123 cri.go:89] found id: ""
	I1010 19:25:22.500819  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.500828  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:22.500835  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:22.500914  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:22.538016  148123 cri.go:89] found id: ""
	I1010 19:25:22.538053  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.538061  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:22.538068  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:22.538124  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:22.576600  148123 cri.go:89] found id: ""
	I1010 19:25:22.576629  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.576637  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:22.576643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:22.576702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:22.616020  148123 cri.go:89] found id: ""
	I1010 19:25:22.616048  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.616057  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:22.616064  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:22.616114  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:22.652238  148123 cri.go:89] found id: ""
	I1010 19:25:22.652271  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.652283  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:22.652295  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:22.652311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:22.726182  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:22.726216  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:22.726234  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:22.814040  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:22.814086  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:22.859254  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:22.859289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.916565  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:22.916607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:20.227871  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.732566  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:20.066872  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.566173  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:23.050833  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.551662  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.432636  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:25.446982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:25.447047  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.482440  148123 cri.go:89] found id: ""
	I1010 19:25:25.482472  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.482484  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:25.482492  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:25.482552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:25.517709  148123 cri.go:89] found id: ""
	I1010 19:25:25.517742  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.517756  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:25.517764  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:25.517867  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:25.561504  148123 cri.go:89] found id: ""
	I1010 19:25:25.561532  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.561544  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:25.561552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:25.561616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:25.601704  148123 cri.go:89] found id: ""
	I1010 19:25:25.601741  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.601753  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:25.601762  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:25.601825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:25.638519  148123 cri.go:89] found id: ""
	I1010 19:25:25.638544  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.638552  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:25.638558  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:25.638609  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:25.673390  148123 cri.go:89] found id: ""
	I1010 19:25:25.673425  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.673436  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:25.673447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:25.673525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:25.708251  148123 cri.go:89] found id: ""
	I1010 19:25:25.708278  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.708286  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:25.708293  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:25.708354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:25.749793  148123 cri.go:89] found id: ""
	I1010 19:25:25.749826  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.749837  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:25.749846  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:25.749861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:25.802511  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:25.802554  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:25.817523  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:25.817562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:25.898595  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:25.898619  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:25.898635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:25.978629  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:25.978684  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:28.520165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:28.532725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:28.532793  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.226875  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.729015  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.066298  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.066963  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.551915  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.558497  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:28.571667  148123 cri.go:89] found id: ""
	I1010 19:25:28.571695  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.571704  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:28.571710  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:28.571770  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:28.608136  148123 cri.go:89] found id: ""
	I1010 19:25:28.608165  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.608174  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:28.608181  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:28.608244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:28.642123  148123 cri.go:89] found id: ""
	I1010 19:25:28.642161  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.642173  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:28.642181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:28.642242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:28.678600  148123 cri.go:89] found id: ""
	I1010 19:25:28.678633  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.678643  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:28.678651  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:28.678702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:28.716627  148123 cri.go:89] found id: ""
	I1010 19:25:28.716660  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.716673  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:28.716681  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:28.716766  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:28.752750  148123 cri.go:89] found id: ""
	I1010 19:25:28.752786  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.752798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:28.752806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:28.752898  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:28.790021  148123 cri.go:89] found id: ""
	I1010 19:25:28.790054  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.790066  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:28.790075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:28.790128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:28.828760  148123 cri.go:89] found id: ""
	I1010 19:25:28.828803  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.828827  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:28.828839  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:28.828877  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:28.879773  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:28.879811  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:28.894311  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:28.894342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:28.968675  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:28.968706  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:28.968729  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:29.045862  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:29.045913  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:31.588772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:31.601453  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:31.601526  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:31.637056  148123 cri.go:89] found id: ""
	I1010 19:25:31.637090  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.637101  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:31.637108  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:31.637183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:31.677825  148123 cri.go:89] found id: ""
	I1010 19:25:31.677854  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.677862  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:31.677867  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:31.677920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:31.715372  148123 cri.go:89] found id: ""
	I1010 19:25:31.715402  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.715411  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:31.715419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:31.715470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:31.753767  148123 cri.go:89] found id: ""
	I1010 19:25:31.753794  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.753806  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:31.753814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:31.753879  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:31.792999  148123 cri.go:89] found id: ""
	I1010 19:25:31.793024  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.793035  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:31.793050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:31.793106  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:31.831571  148123 cri.go:89] found id: ""
	I1010 19:25:31.831598  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.831607  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:31.831614  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:31.831673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:31.880382  148123 cri.go:89] found id: ""
	I1010 19:25:31.880422  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.880436  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:31.880447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:31.880527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:31.919596  148123 cri.go:89] found id: ""
	I1010 19:25:31.919627  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.919637  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:31.919647  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:31.919661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:31.967963  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:31.968001  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:31.983064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:31.983106  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:32.056073  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:32.056105  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:32.056122  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:32.142927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:32.142978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:30.226683  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.227047  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.565699  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:31.566109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.051411  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.052064  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.550062  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.690025  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:34.705326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:34.705413  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:34.740756  148123 cri.go:89] found id: ""
	I1010 19:25:34.740786  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.740793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:34.740799  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:34.740872  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:34.777021  148123 cri.go:89] found id: ""
	I1010 19:25:34.777049  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.777061  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:34.777069  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:34.777123  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:34.826699  148123 cri.go:89] found id: ""
	I1010 19:25:34.826735  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.826747  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:34.826754  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:34.826896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:34.864297  148123 cri.go:89] found id: ""
	I1010 19:25:34.864329  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.864340  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:34.864348  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:34.864415  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:34.899198  148123 cri.go:89] found id: ""
	I1010 19:25:34.899234  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.899247  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:34.899256  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:34.899315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:34.934132  148123 cri.go:89] found id: ""
	I1010 19:25:34.934162  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.934170  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:34.934176  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:34.934238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:34.969858  148123 cri.go:89] found id: ""
	I1010 19:25:34.969891  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.969903  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:34.969911  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:34.969978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:35.006082  148123 cri.go:89] found id: ""
	I1010 19:25:35.006121  148123 logs.go:282] 0 containers: []
	W1010 19:25:35.006132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:35.006145  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:35.006160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:35.060596  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:35.060631  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:35.076246  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:35.076281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:35.151879  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:35.151912  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:35.151931  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:35.231802  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:35.231845  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:37.774550  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:37.787871  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:37.787938  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:37.824273  148123 cri.go:89] found id: ""
	I1010 19:25:37.824306  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.824317  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:37.824325  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:37.824410  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:37.860548  148123 cri.go:89] found id: ""
	I1010 19:25:37.860582  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.860593  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:37.860601  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:37.860677  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:37.894616  148123 cri.go:89] found id: ""
	I1010 19:25:37.894644  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.894654  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:37.894662  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:37.894730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:37.931247  148123 cri.go:89] found id: ""
	I1010 19:25:37.931272  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.931280  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:37.931287  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:37.931337  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:37.968028  148123 cri.go:89] found id: ""
	I1010 19:25:37.968068  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.968079  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:37.968086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:37.968153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:38.004661  148123 cri.go:89] found id: ""
	I1010 19:25:38.004692  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.004700  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:38.004706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:38.004760  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:38.040874  148123 cri.go:89] found id: ""
	I1010 19:25:38.040906  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.040915  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:38.040922  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:38.040990  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:38.083771  148123 cri.go:89] found id: ""
	I1010 19:25:38.083802  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.083811  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:38.083821  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:38.083836  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:38.099645  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:38.099683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:38.172168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:38.172207  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:38.172223  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:38.248523  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:38.248561  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:38.306594  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:38.306630  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:34.728106  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:37.226285  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.065919  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.066751  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.067361  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.550359  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.551190  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.873868  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:40.889561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:40.889668  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:40.926769  148123 cri.go:89] found id: ""
	I1010 19:25:40.926800  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.926808  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:40.926816  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:40.926869  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:40.962564  148123 cri.go:89] found id: ""
	I1010 19:25:40.962592  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.962600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:40.962606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:40.962659  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:40.997856  148123 cri.go:89] found id: ""
	I1010 19:25:40.997885  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.997894  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:40.997900  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:40.997951  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:41.035479  148123 cri.go:89] found id: ""
	I1010 19:25:41.035506  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.035514  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:41.035520  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:41.035575  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:41.077733  148123 cri.go:89] found id: ""
	I1010 19:25:41.077817  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.077830  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:41.077840  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:41.077916  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:41.115439  148123 cri.go:89] found id: ""
	I1010 19:25:41.115476  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.115489  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:41.115498  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:41.115552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:41.152586  148123 cri.go:89] found id: ""
	I1010 19:25:41.152625  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.152636  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:41.152643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:41.152695  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:41.186807  148123 cri.go:89] found id: ""
	I1010 19:25:41.186840  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.186851  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:41.186862  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:41.186880  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:41.200404  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:41.200447  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:41.280717  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:41.280744  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:41.280760  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:41.360561  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:41.360611  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:41.404461  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:41.404497  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:39.226903  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:41.227077  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.727197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.570404  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.066523  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.050813  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.051094  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.955723  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:43.970895  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:43.970964  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:44.012162  148123 cri.go:89] found id: ""
	I1010 19:25:44.012194  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.012203  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:44.012209  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:44.012282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:44.074583  148123 cri.go:89] found id: ""
	I1010 19:25:44.074606  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.074614  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:44.074620  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:44.074675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:44.133562  148123 cri.go:89] found id: ""
	I1010 19:25:44.133588  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.133596  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:44.133602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:44.133658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:44.168832  148123 cri.go:89] found id: ""
	I1010 19:25:44.168883  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.168896  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:44.168908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:44.168967  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:44.204896  148123 cri.go:89] found id: ""
	I1010 19:25:44.204923  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.204934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:44.204943  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:44.205002  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:44.242410  148123 cri.go:89] found id: ""
	I1010 19:25:44.242437  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.242448  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:44.242456  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:44.242524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:44.278553  148123 cri.go:89] found id: ""
	I1010 19:25:44.278581  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.278589  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:44.278595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:44.278658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:44.316099  148123 cri.go:89] found id: ""
	I1010 19:25:44.316125  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.316132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:44.316141  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:44.316155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:44.365809  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:44.365860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:44.380129  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:44.380162  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:44.454241  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:44.454267  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:44.454281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:44.530928  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:44.530974  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.078279  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:47.092233  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:47.092310  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:47.129517  148123 cri.go:89] found id: ""
	I1010 19:25:47.129557  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.129573  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:47.129582  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:47.129654  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:47.170309  148123 cri.go:89] found id: ""
	I1010 19:25:47.170345  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.170357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:47.170365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:47.170432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:47.205110  148123 cri.go:89] found id: ""
	I1010 19:25:47.205145  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.205156  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:47.205162  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:47.205228  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:47.245668  148123 cri.go:89] found id: ""
	I1010 19:25:47.245695  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.245704  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:47.245711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:47.245768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:47.284549  148123 cri.go:89] found id: ""
	I1010 19:25:47.284576  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.284584  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:47.284590  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:47.284643  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:47.324676  148123 cri.go:89] found id: ""
	I1010 19:25:47.324708  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.324720  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:47.324729  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:47.324782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:47.364315  148123 cri.go:89] found id: ""
	I1010 19:25:47.364347  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.364356  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:47.364362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:47.364433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:47.400611  148123 cri.go:89] found id: ""
	I1010 19:25:47.400641  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.400652  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:47.400664  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:47.400680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:47.415185  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:47.415227  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:47.481638  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:47.481665  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:47.481681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:47.560135  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:47.560171  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.602144  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:47.602174  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:46.227386  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:48.227699  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.066887  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.565340  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.051459  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:49.550170  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:51.554542  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.151770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:50.165336  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:50.165403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:50.201045  148123 cri.go:89] found id: ""
	I1010 19:25:50.201072  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.201082  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:50.201089  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:50.201154  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:50.237047  148123 cri.go:89] found id: ""
	I1010 19:25:50.237082  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.237094  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:50.237102  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:50.237174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:50.273727  148123 cri.go:89] found id: ""
	I1010 19:25:50.273756  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.273767  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:50.273780  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:50.273843  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:50.311397  148123 cri.go:89] found id: ""
	I1010 19:25:50.311424  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.311433  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:50.311450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:50.311513  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:50.347593  148123 cri.go:89] found id: ""
	I1010 19:25:50.347625  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.347637  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:50.347705  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:50.347782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:50.382195  148123 cri.go:89] found id: ""
	I1010 19:25:50.382228  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.382240  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:50.382247  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:50.382313  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:50.419189  148123 cri.go:89] found id: ""
	I1010 19:25:50.419221  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.419229  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:50.419236  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:50.419297  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:50.454368  148123 cri.go:89] found id: ""
	I1010 19:25:50.454399  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.454410  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:50.454421  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:50.454440  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:50.495575  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:50.495623  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.548691  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:50.548737  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:50.564585  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:50.564621  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:50.637152  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:50.637174  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:50.637190  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.221939  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:53.235889  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:53.235968  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:53.278589  148123 cri.go:89] found id: ""
	I1010 19:25:53.278620  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.278632  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:53.278640  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:53.278705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:53.315062  148123 cri.go:89] found id: ""
	I1010 19:25:53.315090  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.315101  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:53.315109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:53.315182  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:53.348994  148123 cri.go:89] found id: ""
	I1010 19:25:53.349031  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.349040  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:53.349046  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:53.349099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:53.383334  148123 cri.go:89] found id: ""
	I1010 19:25:53.383363  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.383373  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:53.383380  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:53.383450  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:53.417888  148123 cri.go:89] found id: ""
	I1010 19:25:53.417920  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.417931  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:53.417938  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:53.418013  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:53.454757  148123 cri.go:89] found id: ""
	I1010 19:25:53.454787  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.454798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:53.454806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:53.454871  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:53.491422  148123 cri.go:89] found id: ""
	I1010 19:25:53.491452  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.491464  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:53.491472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:53.491541  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:53.531240  148123 cri.go:89] found id: ""
	I1010 19:25:53.531271  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.531282  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:53.531294  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:53.531310  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.727196  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.226957  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.065907  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:52.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:54.051112  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:56.554137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.582576  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:53.582608  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:53.596481  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:53.596511  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:53.666134  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:53.666162  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:53.666178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.745621  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:53.745659  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.290744  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:56.307536  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:56.307610  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:56.353456  148123 cri.go:89] found id: ""
	I1010 19:25:56.353484  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.353494  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:56.353501  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:56.353553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:56.394093  148123 cri.go:89] found id: ""
	I1010 19:25:56.394120  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.394131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:56.394138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:56.394203  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:56.428388  148123 cri.go:89] found id: ""
	I1010 19:25:56.428428  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.428441  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:56.428450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:56.428524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:56.464414  148123 cri.go:89] found id: ""
	I1010 19:25:56.464459  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.464472  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:56.464480  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:56.464563  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:56.505271  148123 cri.go:89] found id: ""
	I1010 19:25:56.505307  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.505316  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:56.505322  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:56.505374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:56.546424  148123 cri.go:89] found id: ""
	I1010 19:25:56.546456  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.546467  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:56.546475  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:56.546545  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:56.587458  148123 cri.go:89] found id: ""
	I1010 19:25:56.587489  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.587501  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:56.587509  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:56.587588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:56.628248  148123 cri.go:89] found id: ""
	I1010 19:25:56.628279  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.628291  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:56.628304  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:56.628320  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:56.642109  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:56.642141  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:56.723646  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:56.723678  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:56.723696  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:56.805849  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:56.805899  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.849863  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:56.849891  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:55.230447  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.726896  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:55.066248  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.565240  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.051145  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:01.554276  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.407093  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:59.422915  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:59.423005  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:59.462867  148123 cri.go:89] found id: ""
	I1010 19:25:59.462898  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.462909  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:59.462917  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:59.462980  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:59.496931  148123 cri.go:89] found id: ""
	I1010 19:25:59.496958  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.496968  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:59.496983  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:59.497049  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:59.532241  148123 cri.go:89] found id: ""
	I1010 19:25:59.532271  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.532283  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:59.532291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:59.532352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:59.571285  148123 cri.go:89] found id: ""
	I1010 19:25:59.571313  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.571324  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:59.571331  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:59.571391  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:59.606687  148123 cri.go:89] found id: ""
	I1010 19:25:59.606721  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.606734  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:59.606741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:59.606800  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:59.643247  148123 cri.go:89] found id: ""
	I1010 19:25:59.643276  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.643286  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:59.643294  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:59.643369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:59.679305  148123 cri.go:89] found id: ""
	I1010 19:25:59.679335  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.679344  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:59.679350  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:59.679407  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:59.716149  148123 cri.go:89] found id: ""
	I1010 19:25:59.716198  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.716210  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:59.716222  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:59.716239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:59.730334  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:59.730364  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:59.799759  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.799786  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:59.799804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:59.881883  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:59.881925  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:59.923755  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:59.923784  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.478043  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:02.492627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:02.492705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:02.532133  148123 cri.go:89] found id: ""
	I1010 19:26:02.532160  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.532172  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:02.532179  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:02.532244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:02.571915  148123 cri.go:89] found id: ""
	I1010 19:26:02.571943  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.571951  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:02.571957  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:02.572014  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:02.616330  148123 cri.go:89] found id: ""
	I1010 19:26:02.616365  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.616376  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:02.616385  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:02.616455  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:02.662245  148123 cri.go:89] found id: ""
	I1010 19:26:02.662275  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.662284  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:02.662291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:02.662354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:02.698692  148123 cri.go:89] found id: ""
	I1010 19:26:02.698720  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.698731  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:02.698739  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:02.698805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:02.736643  148123 cri.go:89] found id: ""
	I1010 19:26:02.736667  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.736677  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:02.736685  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:02.736742  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:02.780356  148123 cri.go:89] found id: ""
	I1010 19:26:02.780388  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.780399  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:02.780407  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:02.780475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:02.816716  148123 cri.go:89] found id: ""
	I1010 19:26:02.816744  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.816752  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:02.816761  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:02.816775  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:02.899002  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:02.899046  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:02.942060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:02.942093  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.997605  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:02.997661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:03.011768  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:03.011804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:03.096404  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.727075  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.227526  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.565903  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.066179  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.049656  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.050425  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:05.596971  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:05.612524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:05.612598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:05.649999  148123 cri.go:89] found id: ""
	I1010 19:26:05.650032  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.650044  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:05.650052  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:05.650119  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:05.684365  148123 cri.go:89] found id: ""
	I1010 19:26:05.684398  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.684419  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:05.684427  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:05.684494  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:05.721801  148123 cri.go:89] found id: ""
	I1010 19:26:05.721832  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.721840  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:05.721847  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:05.721908  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:05.762010  148123 cri.go:89] found id: ""
	I1010 19:26:05.762037  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.762046  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:05.762056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:05.762117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:05.803425  148123 cri.go:89] found id: ""
	I1010 19:26:05.803457  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.803468  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:05.803476  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:05.803551  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:05.838800  148123 cri.go:89] found id: ""
	I1010 19:26:05.838833  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.838845  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:05.838853  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:05.838921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:05.875110  148123 cri.go:89] found id: ""
	I1010 19:26:05.875147  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.875158  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:05.875167  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:05.875245  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:05.908951  148123 cri.go:89] found id: ""
	I1010 19:26:05.908988  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.909000  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:05.909012  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:05.909027  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:05.922263  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:05.922293  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:05.996441  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:05.996465  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:05.996484  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:06.078113  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:06.078160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:06.122919  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:06.122956  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:04.726335  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.728178  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.066573  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.564991  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.566655  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.050522  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:10.550288  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.674464  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:08.688452  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:08.688570  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:08.726240  148123 cri.go:89] found id: ""
	I1010 19:26:08.726269  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.726279  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:08.726287  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:08.726370  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:08.764762  148123 cri.go:89] found id: ""
	I1010 19:26:08.764790  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.764799  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:08.764806  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:08.764887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:08.805522  148123 cri.go:89] found id: ""
	I1010 19:26:08.805552  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.805563  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:08.805572  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:08.805642  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:08.845694  148123 cri.go:89] found id: ""
	I1010 19:26:08.845729  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.845740  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:08.845747  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:08.845817  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:08.885024  148123 cri.go:89] found id: ""
	I1010 19:26:08.885054  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.885066  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:08.885074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:08.885138  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:08.927500  148123 cri.go:89] found id: ""
	I1010 19:26:08.927531  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.927540  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:08.927547  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:08.927616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:08.963870  148123 cri.go:89] found id: ""
	I1010 19:26:08.963904  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.963916  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:08.963924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:08.963988  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:08.997984  148123 cri.go:89] found id: ""
	I1010 19:26:08.998017  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.998027  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:08.998039  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:08.998056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:09.049307  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:09.049341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:09.063341  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:09.063432  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:09.145190  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:09.145214  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:09.145226  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:09.231409  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:09.231445  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:11.773475  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:11.788981  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:11.789055  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:11.830289  148123 cri.go:89] found id: ""
	I1010 19:26:11.830319  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.830327  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:11.830333  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:11.830399  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:11.867093  148123 cri.go:89] found id: ""
	I1010 19:26:11.867120  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.867131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:11.867138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:11.867212  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:11.902146  148123 cri.go:89] found id: ""
	I1010 19:26:11.902181  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.902192  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:11.902201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:11.902271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:11.938652  148123 cri.go:89] found id: ""
	I1010 19:26:11.938681  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.938691  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:11.938703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:11.938771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:11.975903  148123 cri.go:89] found id: ""
	I1010 19:26:11.975937  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.975947  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:11.975955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:11.976020  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:12.014083  148123 cri.go:89] found id: ""
	I1010 19:26:12.014111  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.014119  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:12.014126  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:12.014190  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:12.057832  148123 cri.go:89] found id: ""
	I1010 19:26:12.057858  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.057867  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:12.057874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:12.057925  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:12.103253  148123 cri.go:89] found id: ""
	I1010 19:26:12.103286  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.103298  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:12.103311  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:12.103326  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:12.171062  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:12.171104  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:12.185463  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:12.185496  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:12.261568  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:12.261592  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:12.261607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:12.345819  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:12.345860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:09.226954  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.227205  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.227457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.066777  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.565854  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:12.551323  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:15.051745  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:14.886283  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:14.901117  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:14.901213  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:14.935235  148123 cri.go:89] found id: ""
	I1010 19:26:14.935264  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.935273  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:14.935280  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:14.935336  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:14.970617  148123 cri.go:89] found id: ""
	I1010 19:26:14.970642  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.970652  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:14.970658  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:14.970722  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:15.011086  148123 cri.go:89] found id: ""
	I1010 19:26:15.011113  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.011122  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:15.011128  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:15.011188  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:15.053004  148123 cri.go:89] found id: ""
	I1010 19:26:15.053026  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.053033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:15.053039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:15.053099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:15.089275  148123 cri.go:89] found id: ""
	I1010 19:26:15.089304  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.089312  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:15.089319  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:15.089383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:15.126620  148123 cri.go:89] found id: ""
	I1010 19:26:15.126645  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.126654  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:15.126660  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:15.126723  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:15.160920  148123 cri.go:89] found id: ""
	I1010 19:26:15.160956  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.160969  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:15.160979  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:15.161037  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:15.197430  148123 cri.go:89] found id: ""
	I1010 19:26:15.197462  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.197474  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:15.197486  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:15.197507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:15.240905  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:15.240953  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:15.297663  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:15.297719  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:15.312279  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:15.312313  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:15.395296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:15.395320  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:15.395336  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:17.978030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:17.992574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:17.992651  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:18.028846  148123 cri.go:89] found id: ""
	I1010 19:26:18.028890  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.028901  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:18.028908  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:18.028965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:18.073004  148123 cri.go:89] found id: ""
	I1010 19:26:18.073033  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.073041  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:18.073047  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:18.073118  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:18.112062  148123 cri.go:89] found id: ""
	I1010 19:26:18.112098  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.112111  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:18.112121  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:18.112209  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:18.151565  148123 cri.go:89] found id: ""
	I1010 19:26:18.151597  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.151608  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:18.151616  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:18.151675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:18.197483  148123 cri.go:89] found id: ""
	I1010 19:26:18.197509  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.197518  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:18.197524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:18.197591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:18.235151  148123 cri.go:89] found id: ""
	I1010 19:26:18.235181  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.235193  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:18.235201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:18.235279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:18.273807  148123 cri.go:89] found id: ""
	I1010 19:26:18.273841  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.273852  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:18.273858  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:18.273920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:18.311644  148123 cri.go:89] found id: ""
	I1010 19:26:18.311677  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.311688  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:18.311700  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:18.311716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:18.355599  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:18.355635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:18.407786  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:18.407830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:18.422944  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:18.422976  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:18.496830  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:18.496873  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:18.496887  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:15.227600  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.726712  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:16.065701  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:18.066861  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.558257  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.050914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:21.108300  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:21.123177  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:21.123243  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:21.162544  148123 cri.go:89] found id: ""
	I1010 19:26:21.162575  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.162586  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:21.162595  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:21.162662  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:21.199799  148123 cri.go:89] found id: ""
	I1010 19:26:21.199828  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.199839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:21.199845  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:21.199901  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:21.246333  148123 cri.go:89] found id: ""
	I1010 19:26:21.246357  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.246366  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:21.246372  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:21.246433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:21.282681  148123 cri.go:89] found id: ""
	I1010 19:26:21.282719  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.282732  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:21.282740  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:21.282810  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:21.319500  148123 cri.go:89] found id: ""
	I1010 19:26:21.319535  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.319546  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:21.319554  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:21.319623  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:21.355779  148123 cri.go:89] found id: ""
	I1010 19:26:21.355809  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.355820  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:21.355828  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:21.355902  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:21.394567  148123 cri.go:89] found id: ""
	I1010 19:26:21.394608  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.394617  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:21.394623  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:21.394684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:21.430009  148123 cri.go:89] found id: ""
	I1010 19:26:21.430046  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.430058  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:21.430069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:21.430089  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:21.443267  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:21.443301  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:21.517046  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:21.517068  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:21.517083  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:21.594927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:21.594978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:21.634274  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:21.634311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:20.227157  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.727736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.566652  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:23.066459  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.550526  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.050647  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:24.189760  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:24.204355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:24.204420  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:24.248130  148123 cri.go:89] found id: ""
	I1010 19:26:24.248160  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.248171  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:24.248178  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:24.248244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:24.293281  148123 cri.go:89] found id: ""
	I1010 19:26:24.293312  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.293324  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:24.293331  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:24.293400  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:24.350709  148123 cri.go:89] found id: ""
	I1010 19:26:24.350743  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.350755  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:24.350765  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:24.350838  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:24.394113  148123 cri.go:89] found id: ""
	I1010 19:26:24.394152  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.394170  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:24.394181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:24.394256  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:24.445999  148123 cri.go:89] found id: ""
	I1010 19:26:24.446030  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.446042  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:24.446051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:24.446120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:24.483566  148123 cri.go:89] found id: ""
	I1010 19:26:24.483596  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.483605  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:24.483612  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:24.483665  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:24.520705  148123 cri.go:89] found id: ""
	I1010 19:26:24.520736  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.520748  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:24.520757  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:24.520825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:24.557316  148123 cri.go:89] found id: ""
	I1010 19:26:24.557346  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.557355  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:24.557364  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:24.557376  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:24.608065  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:24.608109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:24.623202  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:24.623250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:24.694665  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:24.694697  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:24.694716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.783369  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:24.783414  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.327524  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:27.341132  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:27.341214  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:27.376589  148123 cri.go:89] found id: ""
	I1010 19:26:27.376618  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.376627  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:27.376633  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:27.376684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:27.414452  148123 cri.go:89] found id: ""
	I1010 19:26:27.414491  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.414502  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:27.414510  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:27.414572  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:27.449839  148123 cri.go:89] found id: ""
	I1010 19:26:27.449867  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.449875  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:27.449882  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:27.449932  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:27.492445  148123 cri.go:89] found id: ""
	I1010 19:26:27.492472  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.492480  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:27.492487  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:27.492549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:27.533032  148123 cri.go:89] found id: ""
	I1010 19:26:27.533085  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.533095  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:27.533122  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:27.533199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:27.571096  148123 cri.go:89] found id: ""
	I1010 19:26:27.571122  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.571130  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:27.571135  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:27.571204  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:27.608762  148123 cri.go:89] found id: ""
	I1010 19:26:27.608798  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.608809  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:27.608818  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:27.608896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:27.648200  148123 cri.go:89] found id: ""
	I1010 19:26:27.648236  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.648248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:27.648260  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:27.648275  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.698124  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:27.698154  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:27.755009  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:27.755052  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:27.770048  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:27.770084  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:27.845475  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:27.845505  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:27.845519  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.729352  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:26.731831  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.566028  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.567052  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.555698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.049914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.423117  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:30.436706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:30.436768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:30.473367  148123 cri.go:89] found id: ""
	I1010 19:26:30.473396  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.473408  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:30.473417  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:30.473470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:30.510560  148123 cri.go:89] found id: ""
	I1010 19:26:30.510599  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.510610  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:30.510619  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:30.510687  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:30.549682  148123 cri.go:89] found id: ""
	I1010 19:26:30.549715  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.549726  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:30.549734  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:30.549798  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:30.586401  148123 cri.go:89] found id: ""
	I1010 19:26:30.586425  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.586434  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:30.586441  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:30.586492  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:30.622012  148123 cri.go:89] found id: ""
	I1010 19:26:30.622037  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.622045  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:30.622052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:30.622107  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:30.658336  148123 cri.go:89] found id: ""
	I1010 19:26:30.658364  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.658372  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:30.658378  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:30.658442  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:30.695495  148123 cri.go:89] found id: ""
	I1010 19:26:30.695523  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.695532  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:30.695538  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:30.695591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:30.735093  148123 cri.go:89] found id: ""
	I1010 19:26:30.735125  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.735136  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:30.735148  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:30.735163  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:30.816097  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:30.816140  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:30.853878  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:30.853915  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:30.906289  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:30.906341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:30.923742  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:30.923769  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:30.993226  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
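	The block above is one pass of minikube's log-collection loop while the apiserver on localhost:8443 is refusing connections. A minimal sketch of the equivalent manual commands, taken from the Run: lines in that pass (shown here only for orientation, with the full crictl fallback shortened):
	    sudo journalctl -u kubelet -n 400                                         # kubelet logs
	    sudo journalctl -u crio -n 400                                            # CRI-O logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings
	    sudo crictl ps -a                                                         # container status (the runner falls back to "docker ps -a")
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig   # fails with "connection refused" while :8443 is down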
	I1010 19:26:33.493799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:33.508083  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:33.508166  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:33.545019  148123 cri.go:89] found id: ""
	I1010 19:26:33.545055  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.545072  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:33.545081  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:33.545150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:29.226673  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:31.227117  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.727777  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.068231  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.566025  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.050118  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:34.051720  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:36.550138  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.584215  148123 cri.go:89] found id: ""
	I1010 19:26:33.584242  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.584252  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:33.584259  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:33.584315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:33.620598  148123 cri.go:89] found id: ""
	I1010 19:26:33.620627  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.620636  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:33.620643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:33.620691  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:33.655225  148123 cri.go:89] found id: ""
	I1010 19:26:33.655252  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.655260  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:33.655267  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:33.655322  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:33.693464  148123 cri.go:89] found id: ""
	I1010 19:26:33.693490  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.693498  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:33.693505  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:33.693554  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:33.734024  148123 cri.go:89] found id: ""
	I1010 19:26:33.734059  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.734071  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:33.734079  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:33.734141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:33.769434  148123 cri.go:89] found id: ""
	I1010 19:26:33.769467  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.769476  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:33.769483  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:33.769533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:33.809011  148123 cri.go:89] found id: ""
	I1010 19:26:33.809040  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.809049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:33.809059  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:33.809071  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:33.864186  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:33.864230  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:33.878681  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:33.878711  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:33.958225  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.958250  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:33.958265  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:34.034730  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:34.034774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.577123  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:36.591494  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:36.591560  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:36.625170  148123 cri.go:89] found id: ""
	I1010 19:26:36.625210  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.625222  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:36.625230  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:36.625295  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:36.660790  148123 cri.go:89] found id: ""
	I1010 19:26:36.660821  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.660831  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:36.660838  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:36.660911  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:36.695742  148123 cri.go:89] found id: ""
	I1010 19:26:36.695774  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.695786  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:36.695793  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:36.695854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:36.729523  148123 cri.go:89] found id: ""
	I1010 19:26:36.729552  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.729561  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:36.729569  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:36.729630  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:36.765515  148123 cri.go:89] found id: ""
	I1010 19:26:36.765549  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.765561  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:36.765571  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:36.765635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:36.799110  148123 cri.go:89] found id: ""
	I1010 19:26:36.799144  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.799155  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:36.799164  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:36.799224  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:36.832806  148123 cri.go:89] found id: ""
	I1010 19:26:36.832834  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.832845  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:36.832865  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:36.832933  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:36.869093  148123 cri.go:89] found id: ""
	I1010 19:26:36.869126  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.869137  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:36.869149  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:36.869165  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:36.922229  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:36.922276  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:36.936973  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:36.937019  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:37.016400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:37.016425  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:37.016448  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:37.106308  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:37.106354  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.227451  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.726229  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:35.067396  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:37.565711  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.550438  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:41.050698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:39.652101  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:39.665546  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:39.665639  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:39.699646  148123 cri.go:89] found id: ""
	I1010 19:26:39.699675  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.699686  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:39.699693  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:39.699745  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:39.735568  148123 cri.go:89] found id: ""
	I1010 19:26:39.735592  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.735600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:39.735606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:39.735656  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:39.770021  148123 cri.go:89] found id: ""
	I1010 19:26:39.770049  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.770060  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:39.770067  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:39.770139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:39.804867  148123 cri.go:89] found id: ""
	I1010 19:26:39.804894  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.804903  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:39.804910  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:39.804972  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:39.847074  148123 cri.go:89] found id: ""
	I1010 19:26:39.847104  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.847115  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:39.847124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:39.847200  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:39.890334  148123 cri.go:89] found id: ""
	I1010 19:26:39.890368  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.890379  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:39.890387  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:39.890456  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:39.926316  148123 cri.go:89] found id: ""
	I1010 19:26:39.926346  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.926355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:39.926362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:39.926416  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:39.966179  148123 cri.go:89] found id: ""
	I1010 19:26:39.966215  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.966227  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:39.966239  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:39.966255  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:40.047457  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:40.047505  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.086611  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:40.086656  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:40.139123  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:40.139167  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:40.152966  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:40.152995  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:40.231009  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
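	Each cycle starts by probing for a running apiserver before gathering logs; both probes come back empty here (found id: "", 0 containers), which is why the describe-nodes call against localhost:8443 keeps failing. A minimal sketch of the same probe follows; the pgrep and crictl commands are the ones shown above, while the curl health check is only an illustrative extra, not something the runner executes:
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # no matching process on the node
	    sudo crictl ps -a --quiet --name=kube-apiserver     # no container IDs returned
	    curl -ks https://localhost:8443/healthz             # would report connection refused while the apiserver is down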
	I1010 19:26:42.731851  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:42.748201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:42.748287  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:42.787567  148123 cri.go:89] found id: ""
	I1010 19:26:42.787599  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.787607  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:42.787614  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:42.787697  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:42.831051  148123 cri.go:89] found id: ""
	I1010 19:26:42.831089  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.831100  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:42.831109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:42.831180  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:42.871352  148123 cri.go:89] found id: ""
	I1010 19:26:42.871390  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.871402  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:42.871410  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:42.871484  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:42.910283  148123 cri.go:89] found id: ""
	I1010 19:26:42.910316  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.910327  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:42.910335  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:42.910403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:42.946971  148123 cri.go:89] found id: ""
	I1010 19:26:42.947003  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.947012  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:42.947019  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:42.947087  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:42.985484  148123 cri.go:89] found id: ""
	I1010 19:26:42.985517  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.985528  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:42.985537  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:42.985603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:43.024500  148123 cri.go:89] found id: ""
	I1010 19:26:43.024535  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.024544  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:43.024552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:43.024608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:43.064008  148123 cri.go:89] found id: ""
	I1010 19:26:43.064039  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.064049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:43.064060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:43.064075  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:43.119367  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:43.119415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:43.133396  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:43.133428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:43.207447  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:43.207470  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:43.207491  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:43.290416  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:43.290456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.727919  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.227782  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:40.066461  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:42.565505  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.051835  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.052308  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.834115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:45.849172  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:45.849238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:45.886020  148123 cri.go:89] found id: ""
	I1010 19:26:45.886049  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.886060  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:45.886067  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:45.886152  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:45.923188  148123 cri.go:89] found id: ""
	I1010 19:26:45.923218  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.923245  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:45.923253  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:45.923316  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:45.960297  148123 cri.go:89] found id: ""
	I1010 19:26:45.960337  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.960351  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:45.960361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:45.960432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:45.995140  148123 cri.go:89] found id: ""
	I1010 19:26:45.995173  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.995184  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:45.995191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:45.995265  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:46.040390  148123 cri.go:89] found id: ""
	I1010 19:26:46.040418  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.040426  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:46.040433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:46.040500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:46.079999  148123 cri.go:89] found id: ""
	I1010 19:26:46.080029  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.080042  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:46.080052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:46.080105  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:46.117331  148123 cri.go:89] found id: ""
	I1010 19:26:46.117357  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.117364  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:46.117370  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:46.117441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:46.152248  148123 cri.go:89] found id: ""
	I1010 19:26:46.152279  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.152290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:46.152303  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:46.152319  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:46.192503  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:46.192533  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:46.246419  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:46.246456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:46.259864  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:46.259895  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:46.338518  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:46.338549  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:46.338567  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:45.726776  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.228318  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:44.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.065636  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.551013  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:50.053824  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.924216  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:48.938741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:48.938805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:48.973310  148123 cri.go:89] found id: ""
	I1010 19:26:48.973339  148123 logs.go:282] 0 containers: []
	W1010 19:26:48.973348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:48.973356  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:48.973411  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:49.010467  148123 cri.go:89] found id: ""
	I1010 19:26:49.010492  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.010500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:49.010506  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:49.010585  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:49.051621  148123 cri.go:89] found id: ""
	I1010 19:26:49.051646  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.051653  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:49.051664  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:49.051727  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:49.091090  148123 cri.go:89] found id: ""
	I1010 19:26:49.091121  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.091132  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:49.091140  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:49.091202  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:49.127669  148123 cri.go:89] found id: ""
	I1010 19:26:49.127712  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.127724  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:49.127732  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:49.127804  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:49.164446  148123 cri.go:89] found id: ""
	I1010 19:26:49.164476  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.164485  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:49.164491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:49.164553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:49.201222  148123 cri.go:89] found id: ""
	I1010 19:26:49.201263  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.201275  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:49.201284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:49.201345  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:49.238258  148123 cri.go:89] found id: ""
	I1010 19:26:49.238283  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.238290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:49.238299  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:49.238311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:49.313576  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:49.313619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:49.351066  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:49.351096  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:49.402772  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:49.402823  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:49.417713  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:49.417752  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:49.488834  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:51.989031  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:52.003056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:52.003140  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:52.039682  148123 cri.go:89] found id: ""
	I1010 19:26:52.039709  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.039718  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:52.039725  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:52.039788  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:52.081402  148123 cri.go:89] found id: ""
	I1010 19:26:52.081433  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.081443  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:52.081449  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:52.081502  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:52.118270  148123 cri.go:89] found id: ""
	I1010 19:26:52.118304  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.118315  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:52.118325  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:52.118392  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:52.154692  148123 cri.go:89] found id: ""
	I1010 19:26:52.154724  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.154735  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:52.154743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:52.154807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:52.190056  148123 cri.go:89] found id: ""
	I1010 19:26:52.190085  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.190094  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:52.190101  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:52.190161  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:52.225468  148123 cri.go:89] found id: ""
	I1010 19:26:52.225501  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.225511  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:52.225521  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:52.225589  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:52.260682  148123 cri.go:89] found id: ""
	I1010 19:26:52.260710  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.260718  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:52.260724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:52.260774  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:52.297613  148123 cri.go:89] found id: ""
	I1010 19:26:52.297642  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.297659  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:52.297672  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:52.297689  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:52.352224  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:52.352267  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:52.367003  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:52.367033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:52.443124  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:52.443157  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:52.443173  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:52.522391  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:52.522433  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:50.726363  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.727069  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:49.069109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:51.566132  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:53.567867  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.554195  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.050995  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.066877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:55.082191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:55.082258  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:55.120729  148123 cri.go:89] found id: ""
	I1010 19:26:55.120763  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.120775  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:55.120784  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:55.120863  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:55.154795  148123 cri.go:89] found id: ""
	I1010 19:26:55.154827  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.154839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:55.154848  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:55.154904  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:55.194846  148123 cri.go:89] found id: ""
	I1010 19:26:55.194876  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.194889  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:55.194897  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:55.194958  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:55.230173  148123 cri.go:89] found id: ""
	I1010 19:26:55.230201  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.230212  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:55.230220  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:55.230290  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:55.264999  148123 cri.go:89] found id: ""
	I1010 19:26:55.265025  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.265032  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:55.265039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:55.265096  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:55.303666  148123 cri.go:89] found id: ""
	I1010 19:26:55.303695  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.303703  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:55.303724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:55.303797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:55.342061  148123 cri.go:89] found id: ""
	I1010 19:26:55.342087  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.342098  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:55.342106  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:55.342171  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:55.378305  148123 cri.go:89] found id: ""
	I1010 19:26:55.378336  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.378345  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:55.378354  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:55.378365  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.416815  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:55.416865  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:55.467371  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:55.467415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:55.481704  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:55.481744  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:55.559219  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:55.559242  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:55.559254  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.141472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:58.154326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:58.154394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:58.193053  148123 cri.go:89] found id: ""
	I1010 19:26:58.193080  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.193088  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:58.193094  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:58.193142  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:58.232514  148123 cri.go:89] found id: ""
	I1010 19:26:58.232538  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.232545  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:58.232551  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:58.232599  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:58.266724  148123 cri.go:89] found id: ""
	I1010 19:26:58.266756  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.266765  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:58.266771  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:58.266824  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:58.301986  148123 cri.go:89] found id: ""
	I1010 19:26:58.302012  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.302019  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:58.302031  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:58.302080  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:58.340394  148123 cri.go:89] found id: ""
	I1010 19:26:58.340431  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.340440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:58.340448  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:58.340500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:58.407035  148123 cri.go:89] found id: ""
	I1010 19:26:58.407069  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.407081  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:58.407088  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:58.407151  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:58.446650  148123 cri.go:89] found id: ""
	I1010 19:26:58.446682  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.446694  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:58.446703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:58.446763  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:58.480719  148123 cri.go:89] found id: ""
	I1010 19:26:58.480750  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.480759  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:58.480768  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:58.480781  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.561568  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:58.561602  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.227199  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.726841  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:56.065787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.566732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.550718  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:59.550793  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
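	Interleaved with that log-collection loop, the other test processes (147213, 148525, 147758) keep polling their metrics-server pods, which remain not Ready throughout the window shown. A rough manual equivalent of that readiness check, as an illustrative kubectl query rather than what pod_ready.go actually runs, with the pod name taken from the lines above:
	    kubectl -n kube-system get pod metrics-server-6867b74b74-kw529 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" until the pod becomes Ready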
	I1010 19:26:58.609237  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:58.609264  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:58.659705  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:58.659748  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:58.674646  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:58.674680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:58.750922  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.252098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:01.265420  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:01.265497  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:01.300133  148123 cri.go:89] found id: ""
	I1010 19:27:01.300168  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.300180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:01.300188  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:01.300272  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:01.337451  148123 cri.go:89] found id: ""
	I1010 19:27:01.337477  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.337485  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:01.337492  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:01.337539  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:01.371987  148123 cri.go:89] found id: ""
	I1010 19:27:01.372019  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.372028  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:01.372034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:01.372098  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:01.408996  148123 cri.go:89] found id: ""
	I1010 19:27:01.409022  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.409033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:01.409042  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:01.409109  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:01.443558  148123 cri.go:89] found id: ""
	I1010 19:27:01.443587  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.443595  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:01.443602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:01.443663  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:01.478517  148123 cri.go:89] found id: ""
	I1010 19:27:01.478546  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.478555  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:01.478561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:01.478611  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:01.513528  148123 cri.go:89] found id: ""
	I1010 19:27:01.513556  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.513568  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:01.513576  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:01.513641  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:01.558861  148123 cri.go:89] found id: ""
	I1010 19:27:01.558897  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.558909  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:01.558921  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:01.558937  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:01.599719  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:01.599747  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:01.650736  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:01.650774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:01.664643  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:01.664669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:01.736533  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.736557  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:01.736570  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:00.225540  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.226962  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:00.567193  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:03.066587  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.050439  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.050984  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:06.550977  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.317302  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:04.330450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:04.330519  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:04.368893  148123 cri.go:89] found id: ""
	I1010 19:27:04.368923  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.368932  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:04.368939  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:04.368993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:04.406598  148123 cri.go:89] found id: ""
	I1010 19:27:04.406625  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.406634  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:04.406640  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:04.406692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:04.441485  148123 cri.go:89] found id: ""
	I1010 19:27:04.441515  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.441525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:04.441532  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:04.441581  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:04.476663  148123 cri.go:89] found id: ""
	I1010 19:27:04.476690  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.476698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:04.476704  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:04.476765  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:04.512124  148123 cri.go:89] found id: ""
	I1010 19:27:04.512171  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.512186  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:04.512195  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:04.512260  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:04.547902  148123 cri.go:89] found id: ""
	I1010 19:27:04.547929  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.547940  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:04.547949  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:04.548008  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:04.585257  148123 cri.go:89] found id: ""
	I1010 19:27:04.585287  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.585297  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:04.585303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:04.585352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:04.621019  148123 cri.go:89] found id: ""
	I1010 19:27:04.621048  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.621057  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:04.621068  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:04.621080  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:04.662770  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:04.662801  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:04.714551  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:04.714592  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:04.730922  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:04.730951  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:04.841139  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.841163  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:04.841178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.428528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:07.442423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:07.442498  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:07.476722  148123 cri.go:89] found id: ""
	I1010 19:27:07.476753  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.476764  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:07.476772  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:07.476835  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:07.512211  148123 cri.go:89] found id: ""
	I1010 19:27:07.512245  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.512256  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:07.512263  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:07.512325  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:07.546546  148123 cri.go:89] found id: ""
	I1010 19:27:07.546588  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.546601  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:07.546619  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:07.546688  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:07.585743  148123 cri.go:89] found id: ""
	I1010 19:27:07.585768  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.585777  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:07.585783  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:07.585848  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:07.626827  148123 cri.go:89] found id: ""
	I1010 19:27:07.626855  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.626865  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:07.626874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:07.626960  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:07.661910  148123 cri.go:89] found id: ""
	I1010 19:27:07.661940  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.661948  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:07.661955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:07.662072  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:07.698255  148123 cri.go:89] found id: ""
	I1010 19:27:07.698288  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.698301  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:07.698309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:07.698374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:07.734389  148123 cri.go:89] found id: ""
	I1010 19:27:07.734417  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.734428  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:07.734440  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:07.734454  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.814089  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:07.814130  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:07.854799  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:07.854831  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:07.906595  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:07.906635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:07.921928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:07.921966  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:07.989839  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.727522  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.226694  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:05.565868  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.567139  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:09.050772  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:11.051291  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.490115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:10.504216  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:10.504292  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:10.554789  148123 cri.go:89] found id: ""
	I1010 19:27:10.554815  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.554825  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:10.554831  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:10.554889  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:10.618970  148123 cri.go:89] found id: ""
	I1010 19:27:10.618997  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.619005  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:10.619012  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:10.619061  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:10.666900  148123 cri.go:89] found id: ""
	I1010 19:27:10.666933  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.666946  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:10.666953  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:10.667023  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:10.704364  148123 cri.go:89] found id: ""
	I1010 19:27:10.704405  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.704430  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:10.704450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:10.704525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:10.742341  148123 cri.go:89] found id: ""
	I1010 19:27:10.742380  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.742389  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:10.742396  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:10.742461  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:10.783056  148123 cri.go:89] found id: ""
	I1010 19:27:10.783088  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.783098  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:10.783105  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:10.783174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:10.823198  148123 cri.go:89] found id: ""
	I1010 19:27:10.823224  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.823233  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:10.823243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:10.823302  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:10.858154  148123 cri.go:89] found id: ""
	I1010 19:27:10.858195  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.858204  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:10.858213  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:10.858229  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:10.935491  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:10.935518  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:10.935532  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:11.015653  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:11.015694  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:11.058765  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:11.058793  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:11.116748  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:11.116799  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:09.727270  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.225797  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.065372  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.065695  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.550669  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.051044  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.631579  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:13.644885  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:13.644953  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:13.685242  148123 cri.go:89] found id: ""
	I1010 19:27:13.685273  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.685285  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:13.685294  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:13.685364  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:13.720831  148123 cri.go:89] found id: ""
	I1010 19:27:13.720866  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.720877  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:13.720885  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:13.720947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:13.766374  148123 cri.go:89] found id: ""
	I1010 19:27:13.766404  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.766413  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:13.766419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:13.766468  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:13.804550  148123 cri.go:89] found id: ""
	I1010 19:27:13.804585  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.804597  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:13.804606  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:13.804674  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:13.842406  148123 cri.go:89] found id: ""
	I1010 19:27:13.842432  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.842440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:13.842460  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:13.842678  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:13.882365  148123 cri.go:89] found id: ""
	I1010 19:27:13.882402  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.882415  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:13.882423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:13.882505  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:13.926016  148123 cri.go:89] found id: ""
	I1010 19:27:13.926052  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.926065  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:13.926075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:13.926177  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:13.960485  148123 cri.go:89] found id: ""
	I1010 19:27:13.960523  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.960533  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:13.960542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:13.960558  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:14.013013  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:14.013055  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:14.026906  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:14.026935  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:14.104469  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:14.104494  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:14.104507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:14.185917  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:14.185959  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:16.732088  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:16.746646  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:16.746710  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:16.784715  148123 cri.go:89] found id: ""
	I1010 19:27:16.784739  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.784749  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:16.784755  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:16.784807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:16.824181  148123 cri.go:89] found id: ""
	I1010 19:27:16.824210  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.824220  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:16.824228  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:16.824289  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:16.860901  148123 cri.go:89] found id: ""
	I1010 19:27:16.860932  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.860941  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:16.860947  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:16.860997  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:16.895127  148123 cri.go:89] found id: ""
	I1010 19:27:16.895159  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.895175  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:16.895183  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:16.895266  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:16.931989  148123 cri.go:89] found id: ""
	I1010 19:27:16.932020  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.932028  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:16.932035  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:16.932086  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:16.967254  148123 cri.go:89] found id: ""
	I1010 19:27:16.967283  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.967292  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:16.967299  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:16.967347  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:17.003010  148123 cri.go:89] found id: ""
	I1010 19:27:17.003035  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.003043  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:17.003050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:17.003097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:17.040053  148123 cri.go:89] found id: ""
	I1010 19:27:17.040093  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.040102  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:17.040118  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:17.040131  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:17.093176  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:17.093217  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:17.108086  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:17.108116  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:17.186423  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:17.186452  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:17.186467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:17.271884  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:17.271926  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:14.227197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.739354  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:14.066233  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.565852  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.566337  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.051613  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:20.549888  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:19.817954  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:19.831924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:19.831993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:19.873415  148123 cri.go:89] found id: ""
	I1010 19:27:19.873441  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.873459  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:19.873466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:19.873527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:19.912174  148123 cri.go:89] found id: ""
	I1010 19:27:19.912207  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.912219  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:19.912227  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:19.912298  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:19.948422  148123 cri.go:89] found id: ""
	I1010 19:27:19.948456  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.948466  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:19.948472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:19.948524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:19.983917  148123 cri.go:89] found id: ""
	I1010 19:27:19.983949  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.983962  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:19.983970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:19.984024  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:20.019160  148123 cri.go:89] found id: ""
	I1010 19:27:20.019187  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.019198  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:20.019207  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:20.019271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:20.056073  148123 cri.go:89] found id: ""
	I1010 19:27:20.056104  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.056116  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:20.056124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:20.056187  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:20.095877  148123 cri.go:89] found id: ""
	I1010 19:27:20.095916  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.095928  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:20.095935  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:20.096007  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:20.132360  148123 cri.go:89] found id: ""
	I1010 19:27:20.132385  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.132394  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:20.132402  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:20.132413  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:20.190573  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:20.190619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:20.205785  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:20.205819  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:20.278882  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:20.278911  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:20.278924  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:20.363982  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:20.364024  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:22.906059  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:22.919839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:22.919912  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:22.958203  148123 cri.go:89] found id: ""
	I1010 19:27:22.958242  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.958252  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:22.958258  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:22.958312  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:22.995834  148123 cri.go:89] found id: ""
	I1010 19:27:22.995866  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.995874  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:22.995880  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:22.995945  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:23.035913  148123 cri.go:89] found id: ""
	I1010 19:27:23.035950  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.035962  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:23.035970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:23.036039  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:23.073995  148123 cri.go:89] found id: ""
	I1010 19:27:23.074036  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.074049  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:23.074057  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:23.074117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:23.111190  148123 cri.go:89] found id: ""
	I1010 19:27:23.111222  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.111234  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:23.111243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:23.111305  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:23.147771  148123 cri.go:89] found id: ""
	I1010 19:27:23.147797  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.147806  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:23.147814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:23.147878  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:23.185485  148123 cri.go:89] found id: ""
	I1010 19:27:23.185517  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.185527  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:23.185535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:23.185598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:23.222030  148123 cri.go:89] found id: ""
	I1010 19:27:23.222060  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.222070  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:23.222081  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:23.222097  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:23.301826  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:23.301849  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:23.301861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:23.378688  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:23.378730  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:23.426683  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:23.426717  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:23.478425  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:23.478467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:19.226994  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.727366  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.067094  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:23.567075  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:22.550076  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:24.551681  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:25.993768  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:26.008311  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:26.008383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:26.046315  148123 cri.go:89] found id: ""
	I1010 19:27:26.046343  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.046355  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:26.046364  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:26.046418  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:26.081823  148123 cri.go:89] found id: ""
	I1010 19:27:26.081847  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.081855  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:26.081861  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:26.081913  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:26.117139  148123 cri.go:89] found id: ""
	I1010 19:27:26.117167  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.117183  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:26.117191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:26.117242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:26.154477  148123 cri.go:89] found id: ""
	I1010 19:27:26.154501  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.154510  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:26.154517  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:26.154568  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:26.187497  148123 cri.go:89] found id: ""
	I1010 19:27:26.187527  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.187535  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:26.187541  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:26.187604  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:26.223110  148123 cri.go:89] found id: ""
	I1010 19:27:26.223140  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.223151  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:26.223160  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:26.223226  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:26.264975  148123 cri.go:89] found id: ""
	I1010 19:27:26.265003  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.265011  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:26.265017  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:26.265067  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:26.307467  148123 cri.go:89] found id: ""
	I1010 19:27:26.307494  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.307503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:26.307512  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:26.307524  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:26.346313  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:26.346342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:26.400069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:26.400109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:26.415467  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:26.415500  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:26.486986  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:26.487016  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:26.487033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:24.226736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.228720  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.726470  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.067100  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.565675  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:27.051110  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.051207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.553085  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.064522  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:29.077631  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:29.077696  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:29.116127  148123 cri.go:89] found id: ""
	I1010 19:27:29.116154  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.116164  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:29.116172  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:29.116241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:29.152863  148123 cri.go:89] found id: ""
	I1010 19:27:29.152893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.152901  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:29.152907  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:29.152963  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:29.189951  148123 cri.go:89] found id: ""
	I1010 19:27:29.189983  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.189992  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:29.189999  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:29.190060  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:29.226611  148123 cri.go:89] found id: ""
	I1010 19:27:29.226646  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.226657  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:29.226665  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:29.226729  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:29.271884  148123 cri.go:89] found id: ""
	I1010 19:27:29.271921  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.271934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:29.271944  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:29.272016  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:29.306128  148123 cri.go:89] found id: ""
	I1010 19:27:29.306168  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.306181  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:29.306191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:29.306255  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:29.341869  148123 cri.go:89] found id: ""
	I1010 19:27:29.341893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.341901  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:29.341908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:29.341962  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:29.376213  148123 cri.go:89] found id: ""
	I1010 19:27:29.376240  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.376249  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:29.376259  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:29.376273  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:29.428827  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:29.428878  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:29.443719  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:29.443754  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:29.516166  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:29.516193  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:29.516208  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:29.596643  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:29.596681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:32.135286  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:32.148791  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:32.148893  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:32.185511  148123 cri.go:89] found id: ""
	I1010 19:27:32.185542  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.185553  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:32.185560  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:32.185626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:32.222908  148123 cri.go:89] found id: ""
	I1010 19:27:32.222934  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.222942  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:32.222948  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:32.222999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:32.260998  148123 cri.go:89] found id: ""
	I1010 19:27:32.261033  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.261045  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:32.261063  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:32.261128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:32.298311  148123 cri.go:89] found id: ""
	I1010 19:27:32.298339  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.298348  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:32.298355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:32.298409  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:32.334134  148123 cri.go:89] found id: ""
	I1010 19:27:32.334220  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.334236  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:32.334249  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:32.334319  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:32.369690  148123 cri.go:89] found id: ""
	I1010 19:27:32.369723  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.369735  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:32.369743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:32.369807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:32.407164  148123 cri.go:89] found id: ""
	I1010 19:27:32.407210  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.407218  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:32.407224  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:32.407279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:32.446468  148123 cri.go:89] found id: ""
	I1010 19:27:32.446494  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.446505  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:32.446519  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:32.446535  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:32.497953  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:32.497997  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:32.513590  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:32.513620  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:32.594037  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:32.594072  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:32.594090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:32.673546  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:32.673587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:30.727725  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:32.727813  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.066731  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:33.067815  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:34.050574  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:36.550119  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.226084  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:35.241152  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:35.241222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:35.283208  148123 cri.go:89] found id: ""
	I1010 19:27:35.283245  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.283269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:35.283286  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:35.283346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:35.320406  148123 cri.go:89] found id: ""
	I1010 19:27:35.320444  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.320457  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:35.320466  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:35.320523  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:35.360984  148123 cri.go:89] found id: ""
	I1010 19:27:35.361015  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.361027  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:35.361034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:35.361097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:35.400191  148123 cri.go:89] found id: ""
	I1010 19:27:35.400219  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.400230  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:35.400244  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:35.400314  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:35.438092  148123 cri.go:89] found id: ""
	I1010 19:27:35.438126  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.438139  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:35.438147  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:35.438215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:35.472656  148123 cri.go:89] found id: ""
	I1010 19:27:35.472681  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.472688  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:35.472694  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:35.472743  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.505895  148123 cri.go:89] found id: ""
	I1010 19:27:35.505931  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.505942  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:35.505950  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:35.506015  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:35.540406  148123 cri.go:89] found id: ""
	I1010 19:27:35.540441  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.540451  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:35.540462  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:35.540478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:35.555874  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:35.555903  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:35.628400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:35.628427  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:35.628453  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:35.708262  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:35.708305  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:35.752796  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:35.752824  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.306212  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:38.319473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:38.319543  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:38.357493  148123 cri.go:89] found id: ""
	I1010 19:27:38.357520  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.357529  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:38.357536  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:38.357588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:38.395908  148123 cri.go:89] found id: ""
	I1010 19:27:38.395935  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.395943  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:38.395949  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:38.396010  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:38.434495  148123 cri.go:89] found id: ""
	I1010 19:27:38.434529  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.434539  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:38.434545  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:38.434608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:38.474647  148123 cri.go:89] found id: ""
	I1010 19:27:38.474686  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.474698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:38.474710  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:38.474784  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:38.511105  148123 cri.go:89] found id: ""
	I1010 19:27:38.511133  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.511141  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:38.511148  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:38.511199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:38.551011  148123 cri.go:89] found id: ""
	I1010 19:27:38.551055  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.551066  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:38.551074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:38.551139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.227301  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:37.726528  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.567838  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.066658  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.552499  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.544561  147758 pod_ready.go:82] duration metric: took 4m0.00091784s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	E1010 19:27:40.544600  147758 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:27:40.544623  147758 pod_ready.go:39] duration metric: took 4m15.623470592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:27:40.544664  147758 kubeadm.go:597] duration metric: took 4m22.92080204s to restartPrimaryControlPlane
	W1010 19:27:40.544737  147758 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:40.544829  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:38.590579  148123 cri.go:89] found id: ""
	I1010 19:27:38.590606  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.590615  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:38.590621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:38.590686  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:38.627775  148123 cri.go:89] found id: ""
	I1010 19:27:38.627812  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.627826  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:38.627838  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:38.627854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.683195  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:38.683239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:38.697220  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:38.697250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:38.771619  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:38.771651  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:38.771668  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:38.851324  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:38.851362  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.408165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:41.422958  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:41.423032  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:41.466774  148123 cri.go:89] found id: ""
	I1010 19:27:41.466806  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.466816  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:41.466823  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:41.466874  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:41.506429  148123 cri.go:89] found id: ""
	I1010 19:27:41.506470  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.506482  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:41.506489  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:41.506555  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:41.548929  148123 cri.go:89] found id: ""
	I1010 19:27:41.548965  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.548976  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:41.548983  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:41.549044  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:41.592243  148123 cri.go:89] found id: ""
	I1010 19:27:41.592274  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.592283  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:41.592290  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:41.592342  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:41.628516  148123 cri.go:89] found id: ""
	I1010 19:27:41.628558  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.628570  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:41.628578  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:41.628650  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:41.665011  148123 cri.go:89] found id: ""
	I1010 19:27:41.665041  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.665053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:41.665060  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:41.665112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:41.702650  148123 cri.go:89] found id: ""
	I1010 19:27:41.702681  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.702692  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:41.702700  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:41.702764  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:41.738971  148123 cri.go:89] found id: ""
	I1010 19:27:41.739002  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.739021  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:41.739031  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:41.739062  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:41.813296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:41.813321  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:41.813335  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:41.895138  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:41.895185  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.941975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:41.942012  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:41.994838  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:41.994888  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:39.727140  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:41.728263  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.566241  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:43.065219  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:44.511153  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:44.524871  148123 kubeadm.go:597] duration metric: took 4m4.194337013s to restartPrimaryControlPlane
	W1010 19:27:44.524953  148123 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:44.524988  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:44.994063  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:27:45.010926  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:27:45.021757  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:27:45.032246  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:27:45.032270  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:27:45.032326  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:27:45.042799  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:27:45.042866  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:27:45.053017  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:27:45.063373  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:27:45.063445  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:27:45.074689  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.085187  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:27:45.085241  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.096275  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:27:45.106686  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:27:45.106750  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:27:45.117018  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:27:45.358920  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:27:44.226853  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:46.227586  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:48.727469  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:45.066410  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:47.569864  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:51.230704  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:53.727351  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:50.065845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:52.066267  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:55.727457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:58.226861  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:54.564611  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:56.566702  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:00.728542  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.225779  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:59.065614  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:01.068088  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.566502  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.739904  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.195045639s)
	I1010 19:28:06.739984  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:06.756046  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:06.768580  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:06.780663  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:06.780732  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:06.780807  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:28:06.792092  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:06.792179  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:06.804515  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:28:06.814969  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:06.815040  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:06.826056  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.836050  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:06.836108  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.846125  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:28:06.855505  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:06.855559  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:06.865367  147758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:06.916227  147758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:06.916375  147758 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:07.036539  147758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:07.036652  147758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:07.036762  147758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:07.044897  147758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:07.046978  147758 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:07.047117  147758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:07.047229  147758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:07.047384  147758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:07.047467  147758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:07.047584  147758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:07.047675  147758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:07.047794  147758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:07.047902  147758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:07.048005  147758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:07.048093  147758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:07.048142  147758 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:07.048210  147758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:07.127836  147758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:07.434492  147758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:07.487567  147758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:07.731314  147758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:07.919060  147758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:07.919565  147758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:07.922740  147758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:05.227611  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.229836  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.065246  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:08.067360  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.925140  147758 out.go:235]   - Booting up control plane ...
	I1010 19:28:07.925239  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:07.925356  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:07.925444  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:07.944375  147758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:07.951182  147758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:07.951274  147758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:08.087325  147758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:08.087560  147758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:08.598361  147758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.081439ms
	I1010 19:28:08.598502  147758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:09.727932  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:12.227939  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:10.566945  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:13.067142  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.100517  147758 kubeadm.go:310] [api-check] The API server is healthy after 5.501985157s
	I1010 19:28:14.119932  147758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:14.149557  147758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:14.207413  147758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:14.207735  147758 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-541370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:14.226199  147758 kubeadm.go:310] [bootstrap-token] Using token: sbg4v0.t5me93bb5vn8m913
	I1010 19:28:14.228059  147758 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:14.228208  147758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:14.241706  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:14.256554  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:14.263129  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:14.274346  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:14.282313  147758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:14.507850  147758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:14.970234  147758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:15.508328  147758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:15.509530  147758 kubeadm.go:310] 
	I1010 19:28:15.509635  147758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:15.509653  147758 kubeadm.go:310] 
	I1010 19:28:15.509743  147758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:15.509762  147758 kubeadm.go:310] 
	I1010 19:28:15.509795  147758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:15.509888  147758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:15.509954  147758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:15.509970  147758 kubeadm.go:310] 
	I1010 19:28:15.510083  147758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:15.510103  147758 kubeadm.go:310] 
	I1010 19:28:15.510203  147758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:15.510214  147758 kubeadm.go:310] 
	I1010 19:28:15.510297  147758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:15.510410  147758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:15.510489  147758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:15.510495  147758 kubeadm.go:310] 
	I1010 19:28:15.510603  147758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:15.510707  147758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:15.510724  147758 kubeadm.go:310] 
	I1010 19:28:15.510807  147758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.510958  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:15.511005  147758 kubeadm.go:310] 	--control-plane 
	I1010 19:28:15.511034  147758 kubeadm.go:310] 
	I1010 19:28:15.511161  147758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:15.511173  147758 kubeadm.go:310] 
	I1010 19:28:15.511268  147758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.511403  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:15.512298  147758 kubeadm.go:310] W1010 19:28:06.890572    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512594  147758 kubeadm.go:310] W1010 19:28:06.891448    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512702  147758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:15.512734  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:28:15.512744  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:15.514703  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:15.516229  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:15.527554  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:15.549266  147758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:15.549362  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:15.549399  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-541370 minikube.k8s.io/updated_at=2024_10_10T19_28_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=embed-certs-541370 minikube.k8s.io/primary=true
	I1010 19:28:15.590732  147758 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:15.740942  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.241392  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.741807  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:14.229241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:16.727260  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.059512  148525 pod_ready.go:82] duration metric: took 4m0.00022742s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:14.059550  148525 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:28:14.059569  148525 pod_ready.go:39] duration metric: took 4m7.001942194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:14.059614  148525 kubeadm.go:597] duration metric: took 4m14.998320151s to restartPrimaryControlPlane
	W1010 19:28:14.059672  148525 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:28:14.059698  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:28:17.241315  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:17.741580  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.241006  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.742042  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.241251  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.741030  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.862541  147758 kubeadm.go:1113] duration metric: took 4.313246481s to wait for elevateKubeSystemPrivileges
	I1010 19:28:19.862579  147758 kubeadm.go:394] duration metric: took 5m2.288571479s to StartCluster
	I1010 19:28:19.862628  147758 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.862751  147758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:19.864528  147758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.864812  147758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:19.864910  147758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:19.865019  147758 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-541370"
	I1010 19:28:19.865041  147758 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-541370"
	W1010 19:28:19.865053  147758 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:19.865062  147758 addons.go:69] Setting default-storageclass=true in profile "embed-certs-541370"
	I1010 19:28:19.865085  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865077  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:19.865129  147758 addons.go:69] Setting metrics-server=true in profile "embed-certs-541370"
	I1010 19:28:19.865164  147758 addons.go:234] Setting addon metrics-server=true in "embed-certs-541370"
	W1010 19:28:19.865179  147758 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:19.865115  147758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-541370"
	I1010 19:28:19.865215  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865558  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865593  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865607  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865629  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865595  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865725  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.866857  147758 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:19.868590  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:19.882524  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I1010 19:28:19.882595  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I1010 19:28:19.882678  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I1010 19:28:19.883065  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883168  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883281  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883559  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883575  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883657  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883669  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883802  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883818  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883968  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.883976  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884141  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884194  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.884408  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884437  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.884684  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884746  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.887912  147758 addons.go:234] Setting addon default-storageclass=true in "embed-certs-541370"
	W1010 19:28:19.887942  147758 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:19.887973  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.888333  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.888383  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.901588  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1010 19:28:19.902131  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.902597  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.902621  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.902927  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.903101  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.904556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.905207  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1010 19:28:19.905621  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.906188  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.906209  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.906599  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.906647  147758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:19.906837  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.907699  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1010 19:28:19.908147  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.908557  147758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:19.908584  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:19.908610  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.908705  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.908717  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.908745  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.909364  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.910154  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.910208  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.910840  147758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:19.912716  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.912722  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:19.912743  147758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:19.912769  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.913199  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.913224  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.913500  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.913682  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.913845  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.913972  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.921800  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922343  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.922374  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922653  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.922842  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.922965  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.923108  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.935097  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I1010 19:28:19.935605  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.936123  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.936146  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.936561  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.936747  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.938789  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.939019  147758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:19.939034  147758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:19.939054  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.941682  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942137  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.942165  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942404  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.942642  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.942767  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.942915  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:20.108247  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:20.149819  147758 node_ready.go:35] waiting up to 6m0s for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163096  147758 node_ready.go:49] node "embed-certs-541370" has status "Ready":"True"
	I1010 19:28:20.163118  147758 node_ready.go:38] duration metric: took 13.26779ms for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163128  147758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:20.168620  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:20.241952  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:20.241978  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:20.249679  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:20.290149  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:20.290190  147758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:20.291475  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:20.410539  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.410582  147758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:20.491567  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.684370  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684403  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.684695  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.684742  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.684749  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.684756  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684764  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.685029  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.685059  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.685036  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.695901  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.695926  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.696202  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.696249  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439463  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147952803s)
	I1010 19:28:21.439626  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.439659  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.439951  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.439969  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.439976  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439997  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.440009  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.440299  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.440298  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.440314  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.780486  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.288854773s)
	I1010 19:28:21.780551  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.780567  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.780948  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.780980  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.780996  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781007  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.781016  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.781289  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.781310  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781331  147758 addons.go:475] Verifying addon metrics-server=true in "embed-certs-541370"
	I1010 19:28:21.783512  147758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:21.784958  147758 addons.go:510] duration metric: took 1.92006141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:19.225844  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:21.227960  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:23.726439  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:22.195129  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:24.678736  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:25.727053  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.727657  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.177348  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:29.177459  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.177485  147758 pod_ready.go:82] duration metric: took 9.008841503s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.177495  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182744  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.182777  147758 pod_ready.go:82] duration metric: took 5.273263ms for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182791  147758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191507  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.191539  147758 pod_ready.go:82] duration metric: took 8.738961ms for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191554  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199167  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.199218  147758 pod_ready.go:82] duration metric: took 7.635672ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199234  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204558  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.204581  147758 pod_ready.go:82] duration metric: took 5.337574ms for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204591  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573781  147758 pod_ready.go:93] pod "kube-proxy-6hdds" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.573808  147758 pod_ready.go:82] duration metric: took 369.210969ms for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573818  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974015  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.974039  147758 pod_ready.go:82] duration metric: took 400.214845ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974048  147758 pod_ready.go:39] duration metric: took 9.810911064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
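	(Editor's note, for context only: the pod_ready.go lines above poll each system pod until its Ready condition reports "True". A minimal, hypothetical client-go sketch of that kind of check is shown below; it is not the test harness's own code, and the kubeconfig path and pod name are placeholders.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True,
	// mirroring the `"Ready":"True"` checks in the log above.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path and pod name; substitute real values.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-59752", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}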
	I1010 19:28:29.974066  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:29.974120  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:29.991332  147758 api_server.go:72] duration metric: took 10.126480862s to wait for apiserver process to appear ...
	I1010 19:28:29.991356  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:29.991382  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:28:29.995855  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:28:29.997488  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:28:29.997516  147758 api_server.go:131] duration metric: took 6.152312ms to wait for apiserver health ...
	I1010 19:28:29.997526  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:28:30.176631  147758 system_pods.go:59] 9 kube-system pods found
	I1010 19:28:30.176662  147758 system_pods.go:61] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.176668  147758 system_pods.go:61] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.176672  147758 system_pods.go:61] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.176676  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.176680  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.176683  147758 system_pods.go:61] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.176686  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.176693  147758 system_pods.go:61] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.176699  147758 system_pods.go:61] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.176707  147758 system_pods.go:74] duration metric: took 179.174083ms to wait for pod list to return data ...
	I1010 19:28:30.176714  147758 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:28:30.375326  147758 default_sa.go:45] found service account: "default"
	I1010 19:28:30.375361  147758 default_sa.go:55] duration metric: took 198.640267ms for default service account to be created ...
	I1010 19:28:30.375374  147758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:28:30.578749  147758 system_pods.go:86] 9 kube-system pods found
	I1010 19:28:30.578780  147758 system_pods.go:89] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.578786  147758 system_pods.go:89] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.578790  147758 system_pods.go:89] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.578794  147758 system_pods.go:89] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.578797  147758 system_pods.go:89] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.578801  147758 system_pods.go:89] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.578804  147758 system_pods.go:89] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.578810  147758 system_pods.go:89] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.578814  147758 system_pods.go:89] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.578822  147758 system_pods.go:126] duration metric: took 203.441477ms to wait for k8s-apps to be running ...
	I1010 19:28:30.578829  147758 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:28:30.578877  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:30.596523  147758 system_svc.go:56] duration metric: took 17.684729ms WaitForService to wait for kubelet
	I1010 19:28:30.596553  147758 kubeadm.go:582] duration metric: took 10.731708748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:28:30.596573  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:28:30.774749  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:28:30.774783  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:28:30.774807  147758 node_conditions.go:105] duration metric: took 178.228671ms to run NodePressure ...
	I1010 19:28:30.774822  147758 start.go:241] waiting for startup goroutines ...
	I1010 19:28:30.774831  147758 start.go:246] waiting for cluster config update ...
	I1010 19:28:30.774845  147758 start.go:255] writing updated cluster config ...
	I1010 19:28:30.775121  147758 ssh_runner.go:195] Run: rm -f paused
	I1010 19:28:30.826689  147758 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:28:30.828795  147758 out.go:177] * Done! kubectl is now configured to use "embed-certs-541370" cluster and "default" namespace by default
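	(Editor's note, for context only: the api_server.go lines above probe https://192.168.39.120:8443/healthz until it returns 200 "ok". A self-contained sketch of such a poll follows; the URL is copied from this run, and TLS verification is skipped purely to keep the example short, which is not what minikube itself does.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; adjust for your cluster.
		url := "https://192.168.39.120:8443/healthz"
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping certificate verification only so the sketch is self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthz returned 200")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}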
	I1010 19:28:29.728096  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:32.229632  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:34.726536  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:36.727032  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:38.727488  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:40.372903  148525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.31317648s)
	I1010 19:28:40.372991  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:40.389319  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:40.400123  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:40.411906  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:40.411932  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:40.411976  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:28:40.421840  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:40.421904  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:40.432229  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:28:40.442121  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:40.442203  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:40.452969  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.463085  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:40.463146  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.473103  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:28:40.482854  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:40.482914  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
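	(Editor's note, for context only: the kubeadm.go:163 lines above grep each leftover kubeconfig for the expected control-plane endpoint and remove the file when the endpoint is absent or the file is missing. A rough, hypothetical Go equivalent of that per-file cleanup is sketched below; the paths and endpoint are taken from the log, the helper itself is not minikube code.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeIfStale deletes path unless it mentions the expected control-plane
	// endpoint, loosely mirroring the "grep ... else rm -f ..." sequence in the log.
	func removeIfStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			if os.IsNotExist(err) {
				return nil // nothing to clean up
			}
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil // still points at the expected endpoint, keep it
		}
		return os.Remove(path)
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8444"
		for _, p := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := removeIfStale(p, endpoint); err != nil {
				fmt.Fprintln(os.Stderr, "cleanup failed:", err)
			}
		}
	}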
	I1010 19:28:40.494023  148525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:40.543369  148525 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:40.543466  148525 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:40.657301  148525 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:40.657462  148525 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:40.657579  148525 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:40.669222  148525 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:40.670995  148525 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:40.671102  148525 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:40.671171  148525 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:40.671284  148525 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:40.671374  148525 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:40.671471  148525 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:40.671557  148525 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:40.671650  148525 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:40.671751  148525 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:40.671895  148525 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:40.672000  148525 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:40.672056  148525 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:40.672136  148525 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:40.876613  148525 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:41.109518  148525 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:41.186751  148525 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:41.424710  148525 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:41.479611  148525 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:41.480235  148525 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:41.483222  148525 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:41.227521  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:43.728023  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:41.484809  148525 out.go:235]   - Booting up control plane ...
	I1010 19:28:41.484935  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:41.485020  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:41.485317  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:41.506919  148525 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:41.517006  148525 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:41.517077  148525 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:41.653211  148525 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:41.653364  148525 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:42.655360  148525 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001910447s
	I1010 19:28:42.655482  148525 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:47.658431  148525 kubeadm.go:310] [api-check] The API server is healthy after 5.003169217s
	I1010 19:28:47.676178  148525 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:47.694752  148525 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:47.720376  148525 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:47.720645  148525 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-361847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:47.736489  148525 kubeadm.go:310] [bootstrap-token] Using token: cprf0t.lm4xp75yi0cdu4sy
	I1010 19:28:46.228217  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:48.726740  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:47.737958  148525 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:47.738089  148525 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:47.750073  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:47.758010  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:47.761649  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:47.768953  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:47.774428  148525 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:48.065988  148525 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:48.502538  148525 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:49.066479  148525 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:49.069842  148525 kubeadm.go:310] 
	I1010 19:28:49.069937  148525 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:49.069947  148525 kubeadm.go:310] 
	I1010 19:28:49.070046  148525 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:49.070058  148525 kubeadm.go:310] 
	I1010 19:28:49.070089  148525 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:49.070166  148525 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:49.070254  148525 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:49.070265  148525 kubeadm.go:310] 
	I1010 19:28:49.070342  148525 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:49.070353  148525 kubeadm.go:310] 
	I1010 19:28:49.070446  148525 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:49.070478  148525 kubeadm.go:310] 
	I1010 19:28:49.070544  148525 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:49.070640  148525 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:49.070750  148525 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:49.070773  148525 kubeadm.go:310] 
	I1010 19:28:49.070880  148525 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:49.070990  148525 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:49.071001  148525 kubeadm.go:310] 
	I1010 19:28:49.071153  148525 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.071299  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:49.071330  148525 kubeadm.go:310] 	--control-plane 
	I1010 19:28:49.071349  148525 kubeadm.go:310] 
	I1010 19:28:49.071468  148525 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:49.071497  148525 kubeadm.go:310] 
	I1010 19:28:49.072228  148525 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.072354  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:49.074595  148525 kubeadm.go:310] W1010 19:28:40.525557    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.074944  148525 kubeadm.go:310] W1010 19:28:40.526329    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.075102  148525 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:49.075143  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:28:49.075166  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:49.077190  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:49.078665  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:49.091792  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:49.113801  148525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:49.113920  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-361847 minikube.k8s.io/updated_at=2024_10_10T19_28_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=default-k8s-diff-port-361847 minikube.k8s.io/primary=true
	I1010 19:28:49.114074  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.154398  148525 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:49.351271  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.852049  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.351441  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.852022  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.351391  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.851329  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.351840  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.852392  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.351397  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.443325  148525 kubeadm.go:1113] duration metric: took 4.329288133s to wait for elevateKubeSystemPrivileges
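	(Editor's note, for context only: the repeated "kubectl get sa default" runs above are minikube polling roughly every 500ms until the default service account exists in the new cluster. A compact, hypothetical client-go version of that wait is sketched below; the kubeconfig path is a placeholder, whereas in the log it is /var/lib/minikube/kubeconfig on the node.)

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; substitute the real one.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Poll until the "default" service account appears in the "default" namespace,
		// matching the retry cadence suggested by the timestamps above.
		for {
			_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
			if err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}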
	I1010 19:28:53.443363  148525 kubeadm.go:394] duration metric: took 4m54.439732071s to StartCluster
	I1010 19:28:53.443386  148525 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.443481  148525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:53.445465  148525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.445747  148525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:53.445842  148525 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:53.445957  148525 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.445980  148525 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.445992  148525 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:53.446004  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:53.446026  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446065  148525 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446100  148525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-361847"
	I1010 19:28:53.446085  148525 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446137  148525 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.446151  148525 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:53.446242  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446515  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.446562  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447089  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447135  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447315  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447360  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.450779  148525 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:53.452838  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:53.465502  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I1010 19:28:53.466020  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.466572  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.466594  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.466772  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1010 19:28:53.467034  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.467209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.467310  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.467828  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.467857  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.467899  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1010 19:28:53.468270  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.468451  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.468866  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.468891  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.469102  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.469150  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.469484  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.470068  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.470114  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.471192  148525 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.471213  148525 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:53.471261  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.471618  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.471664  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.486550  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1010 19:28:53.487068  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.487608  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.487626  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.488015  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.488329  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.490200  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I1010 19:28:53.490240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.490790  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.491318  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.491341  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.491682  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.491957  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1010 19:28:53.492100  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.492423  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.492731  148525 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:53.492811  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.492831  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.493240  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.493885  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.493979  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.494031  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.494359  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:53.494381  148525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:53.494397  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.495771  148525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:51.226596  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227299  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227335  147213 pod_ready.go:82] duration metric: took 4m0.007224391s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:53.227346  147213 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1010 19:28:53.227355  147213 pod_ready.go:39] duration metric: took 4m5.554224355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.227375  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:53.227419  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:53.227484  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:53.288713  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.288749  147213 cri.go:89] found id: ""
	I1010 19:28:53.288759  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:53.288823  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.294819  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:53.294904  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:53.340169  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:53.340197  147213 cri.go:89] found id: ""
	I1010 19:28:53.340207  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:53.340271  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.345214  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:53.345292  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:53.392808  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.392838  147213 cri.go:89] found id: ""
	I1010 19:28:53.392859  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:53.392921  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.398275  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:53.398361  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:53.439567  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.439594  147213 cri.go:89] found id: ""
	I1010 19:28:53.439604  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:53.439665  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.444366  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:53.444436  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:53.522580  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:53.522597  147213 cri.go:89] found id: ""
	I1010 19:28:53.522605  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:53.522654  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.528890  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:53.528974  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:53.575933  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:53.575963  147213 cri.go:89] found id: ""
	I1010 19:28:53.575975  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:53.576035  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.581693  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:53.581763  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:53.619789  147213 cri.go:89] found id: ""
	I1010 19:28:53.619819  147213 logs.go:282] 0 containers: []
	W1010 19:28:53.619831  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:53.619839  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:53.619899  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:53.659715  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:53.659746  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:53.659752  147213 cri.go:89] found id: ""
	I1010 19:28:53.659762  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:53.659828  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.664377  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.668766  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:53.668796  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:53.685976  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:53.686007  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:53.497232  148525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:53.497251  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:53.497273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.497732  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498599  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.498627  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498971  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.499159  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.499312  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.499414  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.501044  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.501531  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501782  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.501956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.502080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.502232  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.512240  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1010 19:28:53.512809  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.513347  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.513368  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.513787  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.514001  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.515436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.515639  148525 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.515659  148525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:53.515681  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.518128  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518596  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.518628  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518909  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.519080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.519216  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.519376  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.712871  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:53.755059  148525 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766564  148525 node_ready.go:49] node "default-k8s-diff-port-361847" has status "Ready":"True"
	I1010 19:28:53.766590  148525 node_ready.go:38] duration metric: took 11.490223ms for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766603  148525 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.777458  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:53.875493  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:53.875525  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:53.911443  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.944885  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:53.944919  148525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:53.945487  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:54.011209  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.011239  148525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:54.039679  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.598172  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598226  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598584  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598608  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.598619  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598898  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:54.598931  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598939  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.643365  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.643392  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.643734  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.643760  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287018  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.341483807s)
	I1010 19:28:55.287045  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.247326452s)
	I1010 19:28:55.287089  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287094  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287112  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287440  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287479  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287506  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287524  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287570  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287589  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287598  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287607  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287818  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287831  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.287855  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287862  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287872  148525 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-361847"
	I1010 19:28:55.287880  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.289944  148525 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:53.841387  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:53.841441  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.892951  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:53.893005  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.947636  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:53.947668  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.992969  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:53.992998  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:54.520652  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:54.520703  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:28:54.588366  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:54.588418  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:54.651179  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:54.651227  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:54.712881  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:54.712925  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:54.779030  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:54.779094  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:54.821961  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:54.822002  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:54.871409  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:54.871446  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:57.425310  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:57.442308  147213 api_server.go:72] duration metric: took 4m17.02881034s to wait for apiserver process to appear ...
	I1010 19:28:57.442343  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:57.442383  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:57.442444  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:57.481392  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.481420  147213 cri.go:89] found id: ""
	I1010 19:28:57.481430  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:57.481503  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.486191  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:57.486269  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:57.532238  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.532271  147213 cri.go:89] found id: ""
	I1010 19:28:57.532284  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:57.532357  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.538105  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:57.538188  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:57.579729  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:57.579757  147213 cri.go:89] found id: ""
	I1010 19:28:57.579767  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:57.579833  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.584494  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:57.584568  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:57.623920  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:57.623949  147213 cri.go:89] found id: ""
	I1010 19:28:57.623960  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:57.624028  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.628927  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:57.629018  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:57.669669  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.669698  147213 cri.go:89] found id: ""
	I1010 19:28:57.669707  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:57.669771  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.674449  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:57.674526  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:57.721856  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:57.721881  147213 cri.go:89] found id: ""
	I1010 19:28:57.721891  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:57.721955  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.726422  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:57.726497  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:57.764464  147213 cri.go:89] found id: ""
	I1010 19:28:57.764499  147213 logs.go:282] 0 containers: []
	W1010 19:28:57.764512  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:57.764521  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:57.764595  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:57.809758  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:57.809784  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:57.809788  147213 cri.go:89] found id: ""
	I1010 19:28:57.809797  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:57.809854  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.815576  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.820152  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:57.820181  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.869339  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:57.869383  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.918698  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:57.918739  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.960939  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:57.960985  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:58.013572  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:58.013612  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:58.053247  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:58.053277  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:58.507428  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:58.507473  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:58.552704  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:58.552742  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:58.672077  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:58.672127  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:58.690997  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:58.691049  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:58.735251  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:58.735287  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:55.291700  148525 addons.go:510] duration metric: took 1.845864985s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:55.785186  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:57.789567  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:00.284444  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:01.297627  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.297660  148525 pod_ready.go:82] duration metric: took 7.520173084s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.297676  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804654  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.804676  148525 pod_ready.go:82] duration metric: took 506.992872ms for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804690  148525 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809788  148525 pod_ready.go:93] pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.809814  148525 pod_ready.go:82] duration metric: took 5.116023ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809825  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814460  148525 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.814486  148525 pod_ready.go:82] duration metric: took 4.652085ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814501  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819719  148525 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.819741  148525 pod_ready.go:82] duration metric: took 5.231258ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819753  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082285  148525 pod_ready.go:93] pod "kube-proxy-jlvn6" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.082325  148525 pod_ready.go:82] duration metric: took 262.562954ms for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082342  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481705  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.481730  148525 pod_ready.go:82] duration metric: took 399.378957ms for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481742  148525 pod_ready.go:39] duration metric: took 8.715126416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:29:02.481779  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:29:02.481832  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:29:02.498706  148525 api_server.go:72] duration metric: took 9.052891898s to wait for apiserver process to appear ...
	I1010 19:29:02.498760  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:29:02.498795  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:29:02.503501  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:29:02.504594  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:02.504620  148525 api_server.go:131] duration metric: took 5.850548ms to wait for apiserver health ...
	I1010 19:29:02.504629  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:02.685579  148525 system_pods.go:59] 9 kube-system pods found
	I1010 19:29:02.685611  148525 system_pods.go:61] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:02.685618  148525 system_pods.go:61] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:02.685624  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:02.685630  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:02.685635  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:02.685639  148525 system_pods.go:61] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:02.685644  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:02.685653  148525 system_pods.go:61] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:02.685658  148525 system_pods.go:61] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:02.685669  148525 system_pods.go:74] duration metric: took 181.032548ms to wait for pod list to return data ...
	I1010 19:29:02.685683  148525 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:02.883256  148525 default_sa.go:45] found service account: "default"
	I1010 19:29:02.883288  148525 default_sa.go:55] duration metric: took 197.59742ms for default service account to be created ...
	I1010 19:29:02.883298  148525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:03.084706  148525 system_pods.go:86] 9 kube-system pods found
	I1010 19:29:03.084737  148525 system_pods.go:89] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:03.084742  148525 system_pods.go:89] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:03.084746  148525 system_pods.go:89] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:03.084751  148525 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:03.084755  148525 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:03.084759  148525 system_pods.go:89] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:03.084762  148525 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:03.084768  148525 system_pods.go:89] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:03.084772  148525 system_pods.go:89] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:03.084779  148525 system_pods.go:126] duration metric: took 201.476637ms to wait for k8s-apps to be running ...
	I1010 19:29:03.084787  148525 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:03.084832  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:03.100986  148525 system_svc.go:56] duration metric: took 16.183062ms WaitForService to wait for kubelet
	I1010 19:29:03.101026  148525 kubeadm.go:582] duration metric: took 9.655245557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:03.101050  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:03.282063  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:03.282095  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:03.282106  148525 node_conditions.go:105] duration metric: took 181.049888ms to run NodePressure ...
	I1010 19:29:03.282119  148525 start.go:241] waiting for startup goroutines ...
	I1010 19:29:03.282125  148525 start.go:246] waiting for cluster config update ...
	I1010 19:29:03.282135  148525 start.go:255] writing updated cluster config ...
	I1010 19:29:03.282414  148525 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:03.331838  148525 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:03.333698  148525 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-361847" cluster and "default" namespace by default
	I1010 19:28:58.775358  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:58.775396  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:58.812210  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:58.812269  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:01.381750  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:29:01.386658  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:29:01.387793  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:01.387819  147213 api_server.go:131] duration metric: took 3.945468552s to wait for apiserver health ...
	I1010 19:29:01.387829  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:01.387861  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:29:01.387948  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:29:01.433312  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:01.433344  147213 cri.go:89] found id: ""
	I1010 19:29:01.433433  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:29:01.433521  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.437920  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:29:01.437983  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:29:01.476429  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.476458  147213 cri.go:89] found id: ""
	I1010 19:29:01.476470  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:29:01.476522  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.480912  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:29:01.480987  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:29:01.522141  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.522164  147213 cri.go:89] found id: ""
	I1010 19:29:01.522173  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:29:01.522238  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.526742  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:29:01.526803  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:29:01.572715  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:01.572747  147213 cri.go:89] found id: ""
	I1010 19:29:01.572759  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:29:01.572814  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.577754  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:29:01.577832  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:29:01.616077  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.616104  147213 cri.go:89] found id: ""
	I1010 19:29:01.616121  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:29:01.616185  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.620622  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:29:01.620702  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:29:01.662859  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:01.662889  147213 cri.go:89] found id: ""
	I1010 19:29:01.662903  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:29:01.662964  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.667491  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:29:01.667585  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:29:01.706191  147213 cri.go:89] found id: ""
	I1010 19:29:01.706217  147213 logs.go:282] 0 containers: []
	W1010 19:29:01.706228  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:29:01.706234  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:29:01.706299  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:29:01.753559  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:01.753581  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:01.753584  147213 cri.go:89] found id: ""
	I1010 19:29:01.753591  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:29:01.753645  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.758179  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.762336  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:29:01.762358  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:29:01.867667  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:29:01.867698  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.911722  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:29:01.911756  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.955152  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:29:01.955189  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.995010  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:29:01.995041  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:02.047505  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:29:02.047546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:02.085080  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:29:02.085110  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:02.128482  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:29:02.128527  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:02.194867  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:29:02.194904  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:29:02.211881  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:29:02.211911  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:02.262969  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:29:02.263013  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:02.302921  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:29:02.302956  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:29:02.671102  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:29:02.671169  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:29:05.241477  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:29:05.241508  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.241513  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.241517  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.241521  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.241525  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.241528  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.241534  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.241540  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.241549  147213 system_pods.go:74] duration metric: took 3.853712488s to wait for pod list to return data ...
	I1010 19:29:05.241556  147213 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:05.244686  147213 default_sa.go:45] found service account: "default"
	I1010 19:29:05.244721  147213 default_sa.go:55] duration metric: took 3.158069ms for default service account to be created ...
	I1010 19:29:05.244733  147213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:05.249372  147213 system_pods.go:86] 8 kube-system pods found
	I1010 19:29:05.249398  147213 system_pods.go:89] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.249404  147213 system_pods.go:89] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.249408  147213 system_pods.go:89] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.249413  147213 system_pods.go:89] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.249418  147213 system_pods.go:89] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.249425  147213 system_pods.go:89] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.249433  147213 system_pods.go:89] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.249442  147213 system_pods.go:89] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.249455  147213 system_pods.go:126] duration metric: took 4.715381ms to wait for k8s-apps to be running ...
	I1010 19:29:05.249467  147213 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:05.249519  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:05.265180  147213 system_svc.go:56] duration metric: took 15.703413ms WaitForService to wait for kubelet
	I1010 19:29:05.265216  147213 kubeadm.go:582] duration metric: took 4m24.851723603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:05.265237  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:05.268775  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:05.268807  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:05.268821  147213 node_conditions.go:105] duration metric: took 3.575195ms to run NodePressure ...
	I1010 19:29:05.268834  147213 start.go:241] waiting for startup goroutines ...
	I1010 19:29:05.268840  147213 start.go:246] waiting for cluster config update ...
	I1010 19:29:05.268869  147213 start.go:255] writing updated cluster config ...
	I1010 19:29:05.269148  147213 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:05.319999  147213 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:05.322189  147213 out.go:177] * Done! kubectl is now configured to use "no-preload-320324" cluster and "default" namespace by default
	I1010 19:29:41.275119  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:29:41.275272  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:29:41.276822  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:29:41.276919  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:29:41.277017  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:29:41.277142  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:29:41.277254  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:29:41.277357  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:29:41.279069  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:29:41.279160  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:29:41.279217  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:29:41.279306  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:29:41.279381  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:29:41.279484  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:29:41.279576  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:29:41.279674  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:29:41.279779  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:29:41.279906  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:29:41.279971  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:29:41.280005  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:29:41.280052  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:29:41.280095  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:29:41.280144  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:29:41.280219  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:29:41.280317  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:29:41.280474  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:29:41.280583  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:29:41.280648  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:29:41.280736  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:29:41.282180  148123 out.go:235]   - Booting up control plane ...
	I1010 19:29:41.282266  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:29:41.282336  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:29:41.282414  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:29:41.282538  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:29:41.282748  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:29:41.282822  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:29:41.282896  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283061  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283123  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283287  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283348  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283519  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283623  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283809  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283900  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.284115  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.284135  148123 kubeadm.go:310] 
	I1010 19:29:41.284201  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:29:41.284243  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:29:41.284257  148123 kubeadm.go:310] 
	I1010 19:29:41.284307  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:29:41.284346  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:29:41.284481  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:29:41.284505  148123 kubeadm.go:310] 
	I1010 19:29:41.284655  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:29:41.284714  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:29:41.284752  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:29:41.284758  148123 kubeadm.go:310] 
	I1010 19:29:41.284913  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:29:41.285038  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:29:41.285055  148123 kubeadm.go:310] 
	I1010 19:29:41.285169  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:29:41.285307  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:29:41.285475  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:29:41.285587  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:29:41.285644  148123 kubeadm.go:310] 
	W1010 19:29:41.285747  148123 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1010 19:29:41.285792  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:29:46.681684  148123 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.395862838s)
	I1010 19:29:46.681797  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:46.696927  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:29:46.708250  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:29:46.708280  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:29:46.708339  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:29:46.719748  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:29:46.719828  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:29:46.730892  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:29:46.742329  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:29:46.742401  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:29:46.752919  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.762860  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:29:46.762932  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.772513  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:29:46.781672  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:29:46.781746  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
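	The Run lines just above (a grep for the control-plane endpoint in each kubeconfig, followed by rm -f whenever the grep fails) are minikube's stale-config cleanup before retrying kubeadm init. A compact, hand-run equivalent on the node might look like the sketch below; the loop and the -q flag are conveniences added here, while the endpoint, file paths, and rm -f behaviour come straight from the log:

	    for f in admin kubelet controller-manager scheduler; do
	      # keep the file only if it already points at the expected control-plane endpoint
	      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	        || sudo rm -f /etc/kubernetes/$f.conf
	    done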
	I1010 19:29:46.791250  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:29:47.018706  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:31:43.351851  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:31:43.351992  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:31:43.353664  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:31:43.353752  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:31:43.353861  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:31:43.353998  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:31:43.354130  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:31:43.354235  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:31:43.356450  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:31:43.356557  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:31:43.356652  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:31:43.356783  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:31:43.356902  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:31:43.357002  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:31:43.357074  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:31:43.357181  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:31:43.357247  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:31:43.357325  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:31:43.357408  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:31:43.357459  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:31:43.357529  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:31:43.357604  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:31:43.357676  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:31:43.357751  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:31:43.357830  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:31:43.357979  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:31:43.358092  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:31:43.358158  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:31:43.358263  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:31:43.360216  148123 out.go:235]   - Booting up control plane ...
	I1010 19:31:43.360332  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:31:43.360463  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:31:43.360551  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:31:43.360673  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:31:43.360907  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:31:43.360976  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:31:43.361058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361309  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361410  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361648  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361742  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361960  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362308  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362399  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362640  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362652  148123 kubeadm.go:310] 
	I1010 19:31:43.362708  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:31:43.362758  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:31:43.362768  148123 kubeadm.go:310] 
	I1010 19:31:43.362809  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:31:43.362858  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:31:43.362981  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:31:43.362993  148123 kubeadm.go:310] 
	I1010 19:31:43.363119  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:31:43.363164  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:31:43.363204  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:31:43.363219  148123 kubeadm.go:310] 
	I1010 19:31:43.363344  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:31:43.363454  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:31:43.363465  148123 kubeadm.go:310] 
	I1010 19:31:43.363591  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:31:43.363708  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:31:43.363803  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:31:43.363892  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:31:43.363978  148123 kubeadm.go:394] duration metric: took 8m3.085634474s to StartCluster
	I1010 19:31:43.364052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:31:43.364159  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:31:43.364268  148123 kubeadm.go:310] 
	I1010 19:31:43.413041  148123 cri.go:89] found id: ""
	I1010 19:31:43.413075  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.413086  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:31:43.413092  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:31:43.413206  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:31:43.456454  148123 cri.go:89] found id: ""
	I1010 19:31:43.456487  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.456499  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:31:43.456507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:31:43.456567  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:31:43.498582  148123 cri.go:89] found id: ""
	I1010 19:31:43.498614  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.498624  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:31:43.498633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:31:43.498694  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:31:43.540557  148123 cri.go:89] found id: ""
	I1010 19:31:43.540589  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.540598  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:31:43.540605  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:31:43.540661  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:31:43.576743  148123 cri.go:89] found id: ""
	I1010 19:31:43.576776  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.576787  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:31:43.576796  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:31:43.576887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:31:43.614620  148123 cri.go:89] found id: ""
	I1010 19:31:43.614652  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.614662  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:31:43.614668  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:31:43.614730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:31:43.652008  148123 cri.go:89] found id: ""
	I1010 19:31:43.652061  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.652075  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:31:43.652084  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:31:43.652153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:31:43.689990  148123 cri.go:89] found id: ""
	I1010 19:31:43.690019  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.690028  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:31:43.690043  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:31:43.690056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:31:43.745634  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:31:43.745673  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:31:43.760064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:31:43.760095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:31:43.840168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:31:43.840197  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:31:43.840214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:31:43.943265  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:31:43.943303  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
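	The "Gathering logs for ..." phase above runs a fixed set of diagnostics over SSH after the failed start. A minimal sketch of collecting the same information by hand; the minikube ssh entry point and the <profile> placeholder are assumptions, while the individual commands are copied from the Run lines above:

	    minikube ssh -p <profile>
	    # inside the node:
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo crictl ps -a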
	W1010 19:31:43.987758  148123 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:31:43.987816  148123 out.go:270] * 
	W1010 19:31:43.987891  148123 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.987906  148123 out.go:270] * 
	W1010 19:31:43.988703  148123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:31:43.991928  148123 out.go:201] 
	W1010 19:31:43.993494  148123 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.993556  148123 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:31:43.993585  148123 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1010 19:31:43.995273  148123 out.go:201] 
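	The exit advice above points at the kubelet cgroup driver. A hypothetical retry based on that hint: only --extra-config=kubelet.cgroup-driver=systemd and the logs command come from the log itself, while the profile name and the explicit --container-runtime=crio flag are placeholders added for illustration:

	    minikube start -p <profile> --container-runtime=crio \
	      --extra-config=kubelet.cgroup-driver=systemd
	    # if the start still fails, capture logs as the boxed suggestion asks
	    minikube logs -p <profile> --file=logs.txt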
	
	
	==> CRI-O <==
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.456408404Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a541d03-e35c-4310-8d94-94f247b3cc0f name=/runtime.v1.RuntimeService/Version
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.457441197Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b6055b5-f325-4f85-a6b3-2741a8ef74ca name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.457828442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589085457806714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b6055b5-f325-4f85-a6b3-2741a8ef74ca name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.458369649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bab1da87-6aee-4387-8c5f-5fbcf43badee name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.458418110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bab1da87-6aee-4387-8c5f-5fbcf43badee name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.458625562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3af3f927e6e218cdbe6528482fbad91146bcf8b8e2f3ecc6f4dd22367e3353f6,PodSandboxId:e01154709f127b006abdad36e3e8ded86fc4b6450036f64af1396f428d2d8f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588535735861530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea1a4ade-9648-401f-a0ad-633ab3c1196b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34d62ea901a006bee1b94b8a0963dce08a38a1fa76a03c69046a03b594563c9,PodSandboxId:f82b561c276715641ce0cbe818bb5ed36fa05d45fee515656f171bc5a4450fd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588535033615341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jlvn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6336f682-0362-4855-b848-3540052aec19,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fafdf63631a8ff669c19a4671a7c2c85fa4dad5700f0ca207bb21013ef99b06,PodSandboxId:77913b146b0c128f6f16ec19db1e1cdad56d2dc8d1143598ae43bdc9cbcc5536,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534445030987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fgxh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4faa977-3205-4395-bda3-8fe24fdcf6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b8f844b7b0586105ff3dba820b339ba8a3f6bbd6277900172e961d7006a1c2,PodSandboxId:c1ea37fae8f88a9283add7ed11f2cc1c0f92b5e5b297926d04325e4132961de1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534125856622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dh9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff14d755-810a-497a-b1fc-
7fe231748af3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbfa3f7b306bd53ceda1d13c69729c39dc0eb7db018adb04d47e120cbdb300f7,PodSandboxId:ad4c2c7d8d35d6ad8b2f97e2a1b6527975ed0d14435ea181f3b3e50a75e8ccd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588522855220602,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb250ff6dad742b9f14cc7b757329d85,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7a70a0d1c6b99da20dd7aa2a10a91457890905859cc04c211e03b36b5e34be,PodSandboxId:5efab2a6937fd8e2fd2d74e6c07f859c00f1369d27e8fe02a033cbdebd922639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588522801390605,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4eaa86354b36640568a0448bbc6bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc897586e115d16a892ff088925d6b09643f59333e746617ae836506a76e3939,PodSandboxId:1ff7755d161d96dc0a648e1b1cd4cb0e147cbc9943c1021b6f1e4857bbe6f06f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588522836342605,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:decf33fb776b612209e9001fe9e0154fc6db1046acee4d9d83f96e7bf6e906ff,PodSandboxId:df22536f2cd09d30a1a1104b472ae89f67f8f97332d2bcc7067831641df363da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588522817430732,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b57c646474679053469c7268c1c49d62,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57335da36e4a25689e6f890eff8bfda0f17a5e5f055b71ba73288383a1b0b07c,PodSandboxId:8c451c4f67d2e5ef483b0964581573cdb6a25ceeddeadde2b5e4166321e63f6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588241406370879,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bab1da87-6aee-4387-8c5f-5fbcf43badee name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.497467798Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3fd89bfd-8d41-4413-9970-11779be8aa2f name=/runtime.v1.RuntimeService/Version
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.497538983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3fd89bfd-8d41-4413-9970-11779be8aa2f name=/runtime.v1.RuntimeService/Version
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.498793125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=558d51fc-623b-4863-b964-cf08bac9950e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.499421602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589085499360751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=558d51fc-623b-4863-b964-cf08bac9950e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.500077228Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d65bc7df-bd36-47f7-90a0-c97b986e8a77 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.500126773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d65bc7df-bd36-47f7-90a0-c97b986e8a77 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.500385828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3af3f927e6e218cdbe6528482fbad91146bcf8b8e2f3ecc6f4dd22367e3353f6,PodSandboxId:e01154709f127b006abdad36e3e8ded86fc4b6450036f64af1396f428d2d8f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588535735861530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea1a4ade-9648-401f-a0ad-633ab3c1196b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34d62ea901a006bee1b94b8a0963dce08a38a1fa76a03c69046a03b594563c9,PodSandboxId:f82b561c276715641ce0cbe818bb5ed36fa05d45fee515656f171bc5a4450fd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588535033615341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jlvn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6336f682-0362-4855-b848-3540052aec19,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fafdf63631a8ff669c19a4671a7c2c85fa4dad5700f0ca207bb21013ef99b06,PodSandboxId:77913b146b0c128f6f16ec19db1e1cdad56d2dc8d1143598ae43bdc9cbcc5536,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534445030987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fgxh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4faa977-3205-4395-bda3-8fe24fdcf6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b8f844b7b0586105ff3dba820b339ba8a3f6bbd6277900172e961d7006a1c2,PodSandboxId:c1ea37fae8f88a9283add7ed11f2cc1c0f92b5e5b297926d04325e4132961de1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534125856622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dh9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff14d755-810a-497a-b1fc-
7fe231748af3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbfa3f7b306bd53ceda1d13c69729c39dc0eb7db018adb04d47e120cbdb300f7,PodSandboxId:ad4c2c7d8d35d6ad8b2f97e2a1b6527975ed0d14435ea181f3b3e50a75e8ccd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588522855220602,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb250ff6dad742b9f14cc7b757329d85,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7a70a0d1c6b99da20dd7aa2a10a91457890905859cc04c211e03b36b5e34be,PodSandboxId:5efab2a6937fd8e2fd2d74e6c07f859c00f1369d27e8fe02a033cbdebd922639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588522801390605,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4eaa86354b36640568a0448bbc6bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc897586e115d16a892ff088925d6b09643f59333e746617ae836506a76e3939,PodSandboxId:1ff7755d161d96dc0a648e1b1cd4cb0e147cbc9943c1021b6f1e4857bbe6f06f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588522836342605,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:decf33fb776b612209e9001fe9e0154fc6db1046acee4d9d83f96e7bf6e906ff,PodSandboxId:df22536f2cd09d30a1a1104b472ae89f67f8f97332d2bcc7067831641df363da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588522817430732,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b57c646474679053469c7268c1c49d62,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57335da36e4a25689e6f890eff8bfda0f17a5e5f055b71ba73288383a1b0b07c,PodSandboxId:8c451c4f67d2e5ef483b0964581573cdb6a25ceeddeadde2b5e4166321e63f6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588241406370879,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d65bc7df-bd36-47f7-90a0-c97b986e8a77 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.533025728Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3af72c9-ffca-40bf-9294-e7cdf6fa2b3a name=/runtime.v1.RuntimeService/Version
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.533099727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3af72c9-ffca-40bf-9294-e7cdf6fa2b3a name=/runtime.v1.RuntimeService/Version
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.534661288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae5699f3-bab3-44b4-949b-09c7fd6d734a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.535119010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589085535092528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae5699f3-bab3-44b4-949b-09c7fd6d734a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.535766750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a200e7af-eb6b-4305-a8ab-adbba7ab8b53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.535837883Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a200e7af-eb6b-4305-a8ab-adbba7ab8b53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.536044750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3af3f927e6e218cdbe6528482fbad91146bcf8b8e2f3ecc6f4dd22367e3353f6,PodSandboxId:e01154709f127b006abdad36e3e8ded86fc4b6450036f64af1396f428d2d8f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588535735861530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea1a4ade-9648-401f-a0ad-633ab3c1196b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34d62ea901a006bee1b94b8a0963dce08a38a1fa76a03c69046a03b594563c9,PodSandboxId:f82b561c276715641ce0cbe818bb5ed36fa05d45fee515656f171bc5a4450fd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588535033615341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jlvn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6336f682-0362-4855-b848-3540052aec19,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fafdf63631a8ff669c19a4671a7c2c85fa4dad5700f0ca207bb21013ef99b06,PodSandboxId:77913b146b0c128f6f16ec19db1e1cdad56d2dc8d1143598ae43bdc9cbcc5536,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534445030987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fgxh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4faa977-3205-4395-bda3-8fe24fdcf6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b8f844b7b0586105ff3dba820b339ba8a3f6bbd6277900172e961d7006a1c2,PodSandboxId:c1ea37fae8f88a9283add7ed11f2cc1c0f92b5e5b297926d04325e4132961de1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534125856622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dh9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff14d755-810a-497a-b1fc-
7fe231748af3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbfa3f7b306bd53ceda1d13c69729c39dc0eb7db018adb04d47e120cbdb300f7,PodSandboxId:ad4c2c7d8d35d6ad8b2f97e2a1b6527975ed0d14435ea181f3b3e50a75e8ccd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588522855220602,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb250ff6dad742b9f14cc7b757329d85,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7a70a0d1c6b99da20dd7aa2a10a91457890905859cc04c211e03b36b5e34be,PodSandboxId:5efab2a6937fd8e2fd2d74e6c07f859c00f1369d27e8fe02a033cbdebd922639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588522801390605,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4eaa86354b36640568a0448bbc6bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc897586e115d16a892ff088925d6b09643f59333e746617ae836506a76e3939,PodSandboxId:1ff7755d161d96dc0a648e1b1cd4cb0e147cbc9943c1021b6f1e4857bbe6f06f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588522836342605,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:decf33fb776b612209e9001fe9e0154fc6db1046acee4d9d83f96e7bf6e906ff,PodSandboxId:df22536f2cd09d30a1a1104b472ae89f67f8f97332d2bcc7067831641df363da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588522817430732,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b57c646474679053469c7268c1c49d62,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57335da36e4a25689e6f890eff8bfda0f17a5e5f055b71ba73288383a1b0b07c,PodSandboxId:8c451c4f67d2e5ef483b0964581573cdb6a25ceeddeadde2b5e4166321e63f6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588241406370879,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a200e7af-eb6b-4305-a8ab-adbba7ab8b53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.559281753Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=692ee71a-7c8e-4479-913e-a565181376dc name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.559551642Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e01154709f127b006abdad36e3e8ded86fc4b6450036f64af1396f428d2d8f29,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ea1a4ade-9648-401f-a0ad-633ab3c1196b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588535613532259,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea1a4ade-9648-401f-a0ad-633ab3c1196b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespac
e\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-10T19:28:55.304371054Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fa73065cb5693077d574c4dd5adb638ce3d145e55e31206ecd07375e68b168ac,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-fdf7p,Uid:6f8ca204-13fe-4adb-9c09-33ec6821ff2d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588535280968538,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-fdf7p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f8ca204-13fe-4adb-9c09-3
3ec6821ff2d,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-10T19:28:54.969219609Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f82b561c276715641ce0cbe818bb5ed36fa05d45fee515656f171bc5a4450fd5,Metadata:&PodSandboxMetadata{Name:kube-proxy-jlvn6,Uid:6336f682-0362-4855-b848-3540052aec19,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588534636017352,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jlvn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6336f682-0362-4855-b848-3540052aec19,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-10T19:28:52.810685377Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:77913b146b0c128f6f16ec19db1e1cdad56d2dc8d1143598ae43bdc9cbcc5536,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc
9-fgxh7,Uid:b4faa977-3205-4395-bda3-8fe24fdcf6cc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588533638431408,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fgxh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4faa977-3205-4395-bda3-8fe24fdcf6cc,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-10T19:28:53.325343059Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c1ea37fae8f88a9283add7ed11f2cc1c0f92b5e5b297926d04325e4132961de1,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dh9th,Uid:ff14d755-810a-497a-b1fc-7fe231748af3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588533617507040,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dh9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff14d755-810a-497a-b1fc-7fe231748af3,k8s-app: kube-dns,
pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-10T19:28:53.308688755Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1ff7755d161d96dc0a648e1b1cd4cb0e147cbc9943c1021b6f1e4857bbe6f06f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-361847,Uid:0b1732c94af48bf557e8cc0c0f19485d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728588522620847534,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.32:8444,kubernetes.io/config.hash: 0b1732c94af48bf557e8cc0c0f19485d,kubernetes.io/config.seen: 2024-10-10T19:28:42.165294638Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id
:ad4c2c7d8d35d6ad8b2f97e2a1b6527975ed0d14435ea181f3b3e50a75e8ccd6,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-361847,Uid:fb250ff6dad742b9f14cc7b757329d85,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588522617883082,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb250ff6dad742b9f14cc7b757329d85,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.32:2379,kubernetes.io/config.hash: fb250ff6dad742b9f14cc7b757329d85,kubernetes.io/config.seen: 2024-10-10T19:28:42.165292021Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:df22536f2cd09d30a1a1104b472ae89f67f8f97332d2bcc7067831641df363da,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-361847,Uid:b57c646474679053469c7268c1c49d62,Namespace:kube-system,Attempt:0,},State:SAN
DBOX_READY,CreatedAt:1728588522612981051,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b57c646474679053469c7268c1c49d62,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b57c646474679053469c7268c1c49d62,kubernetes.io/config.seen: 2024-10-10T19:28:42.165297409Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5efab2a6937fd8e2fd2d74e6c07f859c00f1369d27e8fe02a033cbdebd922639,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-361847,Uid:8f4eaa86354b36640568a0448bbc6bb4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588522604226596,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 8f4eaa86354b36640568a0448bbc6bb4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8f4eaa86354b36640568a0448bbc6bb4,kubernetes.io/config.seen: 2024-10-10T19:28:42.165295955Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8c451c4f67d2e5ef483b0964581573cdb6a25ceeddeadde2b5e4166321e63f6f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-361847,Uid:0b1732c94af48bf557e8cc0c0f19485d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728588241088639914,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.32:8444,kubernetes.io/config.hash: 0b1732c94af48bf557e8cc0c0f19485d,kubernetes.io/config.seen
: 2024-10-10T19:24:00.591298655Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=692ee71a-7c8e-4479-913e-a565181376dc name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.560542789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d5c8b27-2bfd-44d1-a31b-0eef3d38b2b8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.560618341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d5c8b27-2bfd-44d1-a31b-0eef3d38b2b8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:05 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:38:05.560834991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3af3f927e6e218cdbe6528482fbad91146bcf8b8e2f3ecc6f4dd22367e3353f6,PodSandboxId:e01154709f127b006abdad36e3e8ded86fc4b6450036f64af1396f428d2d8f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588535735861530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea1a4ade-9648-401f-a0ad-633ab3c1196b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34d62ea901a006bee1b94b8a0963dce08a38a1fa76a03c69046a03b594563c9,PodSandboxId:f82b561c276715641ce0cbe818bb5ed36fa05d45fee515656f171bc5a4450fd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588535033615341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jlvn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6336f682-0362-4855-b848-3540052aec19,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fafdf63631a8ff669c19a4671a7c2c85fa4dad5700f0ca207bb21013ef99b06,PodSandboxId:77913b146b0c128f6f16ec19db1e1cdad56d2dc8d1143598ae43bdc9cbcc5536,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534445030987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fgxh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4faa977-3205-4395-bda3-8fe24fdcf6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b8f844b7b0586105ff3dba820b339ba8a3f6bbd6277900172e961d7006a1c2,PodSandboxId:c1ea37fae8f88a9283add7ed11f2cc1c0f92b5e5b297926d04325e4132961de1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534125856622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dh9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff14d755-810a-497a-b1fc-
7fe231748af3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbfa3f7b306bd53ceda1d13c69729c39dc0eb7db018adb04d47e120cbdb300f7,PodSandboxId:ad4c2c7d8d35d6ad8b2f97e2a1b6527975ed0d14435ea181f3b3e50a75e8ccd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588522855220602,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb250ff6dad742b9f14cc7b757329d85,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7a70a0d1c6b99da20dd7aa2a10a91457890905859cc04c211e03b36b5e34be,PodSandboxId:5efab2a6937fd8e2fd2d74e6c07f859c00f1369d27e8fe02a033cbdebd922639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588522801390605,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4eaa86354b36640568a0448bbc6bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc897586e115d16a892ff088925d6b09643f59333e746617ae836506a76e3939,PodSandboxId:1ff7755d161d96dc0a648e1b1cd4cb0e147cbc9943c1021b6f1e4857bbe6f06f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588522836342605,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:decf33fb776b612209e9001fe9e0154fc6db1046acee4d9d83f96e7bf6e906ff,PodSandboxId:df22536f2cd09d30a1a1104b472ae89f67f8f97332d2bcc7067831641df363da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588522817430732,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b57c646474679053469c7268c1c49d62,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57335da36e4a25689e6f890eff8bfda0f17a5e5f055b71ba73288383a1b0b07c,PodSandboxId:8c451c4f67d2e5ef483b0964581573cdb6a25ceeddeadde2b5e4166321e63f6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588241406370879,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d5c8b27-2bfd-44d1-a31b-0eef3d38b2b8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3af3f927e6e21       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   e01154709f127       storage-provisioner
	c34d62ea901a0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   f82b561c27671       kube-proxy-jlvn6
	1fafdf63631a8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   77913b146b0c1       coredns-7c65d6cfc9-fgxh7
	c8b8f844b7b05       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   c1ea37fae8f88       coredns-7c65d6cfc9-dh9th
	fbfa3f7b306bd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   ad4c2c7d8d35d       etcd-default-k8s-diff-port-361847
	dc897586e115d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   1ff7755d161d9       kube-apiserver-default-k8s-diff-port-361847
	decf33fb776b6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   df22536f2cd09       kube-scheduler-default-k8s-diff-port-361847
	0b7a70a0d1c6b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   5efab2a6937fd       kube-controller-manager-default-k8s-diff-port-361847
	57335da36e4a2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   8c451c4f67d2e       kube-apiserver-default-k8s-diff-port-361847
	
	
	==> coredns [1fafdf63631a8ff669c19a4671a7c2c85fa4dad5700f0ca207bb21013ef99b06] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c8b8f844b7b0586105ff3dba820b339ba8a3f6bbd6277900172e961d7006a1c2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-361847
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-361847
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=default-k8s-diff-port-361847
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T19_28_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 19:28:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-361847
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 19:37:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 19:34:04 +0000   Thu, 10 Oct 2024 19:28:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 19:34:04 +0000   Thu, 10 Oct 2024 19:28:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 19:34:04 +0000   Thu, 10 Oct 2024 19:28:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 19:34:04 +0000   Thu, 10 Oct 2024 19:28:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.32
	  Hostname:    default-k8s-diff-port-361847
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4eae24e916514d64996e008ddd3e63f0
	  System UUID:                4eae24e9-1651-4d64-996e-008ddd3e63f0
	  Boot ID:                    9b3a015f-d090-461f-84c7-df645892ed0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dh9th                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-7c65d6cfc9-fgxh7                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-default-k8s-diff-port-361847                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-361847             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-361847    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-jlvn6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-default-k8s-diff-port-361847             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-6867b74b74-fdf7p                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node default-k8s-diff-port-361847 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node default-k8s-diff-port-361847 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node default-k8s-diff-port-361847 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node default-k8s-diff-port-361847 event: Registered Node default-k8s-diff-port-361847 in Controller
	
	
	==> dmesg <==
	[  +0.053424] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041721] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.187571] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.537652] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.606078] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.103148] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.058638] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053693] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.201763] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.121613] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.315074] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.352652] systemd-fstab-generator[793]: Ignoring "noauto" option for root device
	[  +0.062955] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.926151] systemd-fstab-generator[915]: Ignoring "noauto" option for root device
	[Oct10 19:24] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.480079] kauditd_printk_skb: 85 callbacks suppressed
	[Oct10 19:28] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.203806] systemd-fstab-generator[2587]: Ignoring "noauto" option for root device
	[  +4.677214] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.884942] systemd-fstab-generator[2909]: Ignoring "noauto" option for root device
	[  +5.439093] systemd-fstab-generator[3026]: Ignoring "noauto" option for root device
	[  +0.100917] kauditd_printk_skb: 14 callbacks suppressed
	[Oct10 19:29] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [fbfa3f7b306bd53ceda1d13c69729c39dc0eb7db018adb04d47e120cbdb300f7] <==
	{"level":"info","ts":"2024-10-10T19:28:43.275604Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-10T19:28:43.275865Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"fbd4dd8524dacdec","initial-advertise-peer-urls":["https://192.168.50.32:2380"],"listen-peer-urls":["https://192.168.50.32:2380"],"advertise-client-urls":["https://192.168.50.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-10T19:28:43.275965Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-10T19:28:43.276072Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-10-10T19:28:43.276114Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.32:2380"}
	{"level":"info","ts":"2024-10-10T19:28:43.402229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-10T19:28:43.402286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-10T19:28:43.402302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgPreVoteResp from fbd4dd8524dacdec at term 1"}
	{"level":"info","ts":"2024-10-10T19:28:43.402313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became candidate at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:43.402319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgVoteResp from fbd4dd8524dacdec at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:43.402327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became leader at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:43.402334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbd4dd8524dacdec elected leader fbd4dd8524dacdec at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:43.406368Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"fbd4dd8524dacdec","local-member-attributes":"{Name:default-k8s-diff-port-361847 ClientURLs:[https://192.168.50.32:2379]}","request-path":"/0/members/fbd4dd8524dacdec/attributes","cluster-id":"2484c988a436b7d1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-10T19:28:43.406490Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:28:43.407219Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:43.417221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-10T19:28:43.417365Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-10T19:28:43.413261Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:28:43.414813Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:28:43.422258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.32:2379"}
	{"level":"info","ts":"2024-10-10T19:28:43.422890Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:28:43.425754Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-10T19:28:43.457411Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2484c988a436b7d1","local-member-id":"fbd4dd8524dacdec","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:43.459433Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:43.459510Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:38:05 up 14 min,  0 users,  load average: 0.23, 0.14, 0.10
	Linux default-k8s-diff-port-361847 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [57335da36e4a25689e6f890eff8bfda0f17a5e5f055b71ba73288383a1b0b07c] <==
	W1010 19:28:39.418934       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.425468       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.493673       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.497033       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.515769       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.537531       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.543923       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.581953       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.606927       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.616860       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.637843       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.667418       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.675005       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.694737       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.703408       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.732722       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.805596       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.820129       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.843447       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.937101       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:40.014605       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:40.061465       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:40.089411       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:40.089411       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:40.123574       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dc897586e115d16a892ff088925d6b09643f59333e746617ae836506a76e3939] <==
	W1010 19:33:46.422984       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:33:46.423284       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:33:46.424255       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:33:46.424338       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:34:46.424653       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:34:46.424748       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1010 19:34:46.424666       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:34:46.424792       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1010 19:34:46.426068       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:34:46.426127       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:36:46.426694       1 handler_proxy.go:99] no RequestInfo found in the context
	W1010 19:36:46.426699       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:36:46.427055       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1010 19:36:46.427076       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:36:46.428238       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:36:46.428293       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0b7a70a0d1c6b99da20dd7aa2a10a91457890905859cc04c211e03b36b5e34be] <==
	E1010 19:32:52.472072       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:32:52.908918       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:33:22.483130       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:33:22.918006       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:33:52.490035       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:33:52.926581       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:34:04.125902       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-361847"
	E1010 19:34:22.496956       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:34:22.934328       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:34:52.503953       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:34:52.942214       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:35:04.428490       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="384.043µs"
	I1010 19:35:17.424232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="178.194µs"
	E1010 19:35:22.515408       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:35:22.949959       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:35:52.523255       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:35:52.957887       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:36:22.529381       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:36:22.965825       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:36:52.535540       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:36:52.974294       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:37:22.543991       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:37:22.982566       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:37:52.551651       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:37:52.991810       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c34d62ea901a006bee1b94b8a0963dce08a38a1fa76a03c69046a03b594563c9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 19:28:55.433214       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 19:28:55.452659       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.32"]
	E1010 19:28:55.452966       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 19:28:55.604687       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 19:28:55.604789       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 19:28:55.604856       1 server_linux.go:169] "Using iptables Proxier"
	I1010 19:28:55.608887       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 19:28:55.610848       1 server.go:483] "Version info" version="v1.31.1"
	I1010 19:28:55.610968       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 19:28:55.614278       1 config.go:199] "Starting service config controller"
	I1010 19:28:55.615809       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 19:28:55.615871       1 config.go:105] "Starting endpoint slice config controller"
	I1010 19:28:55.615881       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 19:28:55.617600       1 config.go:328] "Starting node config controller"
	I1010 19:28:55.617610       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 19:28:55.716024       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1010 19:28:55.716108       1 shared_informer.go:320] Caches are synced for service config
	I1010 19:28:55.717671       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [decf33fb776b612209e9001fe9e0154fc6db1046acee4d9d83f96e7bf6e906ff] <==
	W1010 19:28:45.477362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1010 19:28:45.478206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.496323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1010 19:28:46.497206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.579399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1010 19:28:46.579529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.643093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1010 19:28:46.643250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.649461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1010 19:28:46.649562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.661549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1010 19:28:46.661652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.688958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1010 19:28:46.689128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.689071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1010 19:28:46.689359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.802868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1010 19:28:46.803666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.819655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1010 19:28:46.820204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.846338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 19:28:46.846658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.926814       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1010 19:28:46.928281       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1010 19:28:48.966763       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 10 19:36:58 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:36:58.540818    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589018540534043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:36:58 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:36:58.541190    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589018540534043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:36:59 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:36:59.408483    2916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fdf7p" podUID="6f8ca204-13fe-4adb-9c09-33ec6821ff2d"
	Oct 10 19:37:08 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:08.542696    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589028542373634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:08 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:08.542766    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589028542373634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:10 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:10.408424    2916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fdf7p" podUID="6f8ca204-13fe-4adb-9c09-33ec6821ff2d"
	Oct 10 19:37:18 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:18.544697    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589038544425495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:18 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:18.544995    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589038544425495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:21 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:21.408427    2916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fdf7p" podUID="6f8ca204-13fe-4adb-9c09-33ec6821ff2d"
	Oct 10 19:37:28 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:28.547202    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589048546743232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:28 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:28.547673    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589048546743232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:34 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:34.408944    2916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fdf7p" podUID="6f8ca204-13fe-4adb-9c09-33ec6821ff2d"
	Oct 10 19:37:38 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:38.551456    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589058551052554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:38 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:38.551480    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589058551052554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:46 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:46.407702    2916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fdf7p" podUID="6f8ca204-13fe-4adb-9c09-33ec6821ff2d"
	Oct 10 19:37:48 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:48.440845    2916 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 19:37:48 default-k8s-diff-port-361847 kubelet[2916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 19:37:48 default-k8s-diff-port-361847 kubelet[2916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 19:37:48 default-k8s-diff-port-361847 kubelet[2916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 19:37:48 default-k8s-diff-port-361847 kubelet[2916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 19:37:48 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:48.553964    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589068553489290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:48 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:48.554007    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589068553489290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:58 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:58.555856    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589078555566712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:58 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:37:58.555902    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589078555566712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:38:01 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:38:01.409129    2916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fdf7p" podUID="6f8ca204-13fe-4adb-9c09-33ec6821ff2d"
	
	
	==> storage-provisioner [3af3f927e6e218cdbe6528482fbad91146bcf8b8e2f3ecc6f4dd22367e3353f6] <==
	I1010 19:28:55.835064       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 19:28:55.844850       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 19:28:55.845024       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 19:28:55.858573       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 19:28:55.858899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-361847_ecf32781-3b84-4083-8316-13968b37b0f6!
	I1010 19:28:55.859691       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea091e49-901f-468f-9bcc-d20776ed10cf", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-361847_ecf32781-3b84-4083-8316-13968b37b0f6 became leader
	I1010 19:28:55.959644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-361847_ecf32781-3b84-4083-8316-13968b37b0f6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-361847 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-fdf7p
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-361847 describe pod metrics-server-6867b74b74-fdf7p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-361847 describe pod metrics-server-6867b74b74-fdf7p: exit status 1 (73.371355ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-fdf7p" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-361847 describe pod metrics-server-6867b74b74-fdf7p: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.61s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1010 19:29:14.583187   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:29:29.521914   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:29:59.530814   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:31:23.922690   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-320324 -n no-preload-320324
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-10 19:38:05.880164261 +0000 UTC m=+6033.565099450
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320324 -n no-preload-320324
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-320324 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-320324 logs -n 25: (2.533969867s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-029826             | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-029826                  | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-029826 --memory=2200 --alsologtostderr   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-541370            | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-029826 image list                           | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:17 UTC | 10 Oct 24 19:18 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320324                  | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-947203        | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-361847  | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-541370                 | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-947203             | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-361847       | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC | 10 Oct 24 19:29 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 19:21:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 19:21:13.943219  148525 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:21:13.943336  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943343  148525 out.go:358] Setting ErrFile to fd 2...
	I1010 19:21:13.943347  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943560  148525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:21:13.944109  148525 out.go:352] Setting JSON to false
	I1010 19:21:13.945219  148525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11020,"bootTime":1728577054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:21:13.945321  148525 start.go:139] virtualization: kvm guest
	I1010 19:21:13.947915  148525 out.go:177] * [default-k8s-diff-port-361847] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:21:13.950021  148525 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:21:13.950037  148525 notify.go:220] Checking for updates...
	I1010 19:21:13.952994  148525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:21:13.954661  148525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:21:13.956438  148525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:21:13.958502  148525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:21:13.960099  148525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:21:13.961930  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:21:13.962374  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.962450  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.978323  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I1010 19:21:13.978926  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.979520  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.979538  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.979954  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.980144  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:13.980446  148525 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:21:13.980745  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.980784  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.996046  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I1010 19:21:13.996534  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.997069  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.997097  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.997530  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.997788  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:14.033593  148525 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 19:21:14.035367  148525 start.go:297] selected driver: kvm2
	I1010 19:21:14.035394  148525 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.035526  148525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:21:14.036341  148525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.036452  148525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:21:14.052462  148525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:21:14.052918  148525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:21:14.052967  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:21:14.053019  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:21:14.053067  148525 start.go:340] cluster config:
	{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.053178  148525 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.055485  148525 out.go:177] * Starting "default-k8s-diff-port-361847" primary control-plane node in "default-k8s-diff-port-361847" cluster
	I1010 19:21:16.773106  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:14.056945  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:21:14.057002  148525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 19:21:14.057014  148525 cache.go:56] Caching tarball of preloaded images
	I1010 19:21:14.057118  148525 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:21:14.057134  148525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 19:21:14.057268  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:21:14.057476  148525 start.go:360] acquireMachinesLock for default-k8s-diff-port-361847: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:21:22.853158  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:25.925174  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:32.005160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:35.077198  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:41.157130  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:44.229127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:50.309136  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:53.381191  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:59.461129  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:02.533201  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:08.613124  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:11.685169  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:17.765161  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:20.837208  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:26.917127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:29.989172  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:36.069147  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:39.141173  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:45.221160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:48.293141  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:51.297376  147758 start.go:364] duration metric: took 3m49.312490934s to acquireMachinesLock for "embed-certs-541370"
	I1010 19:22:51.297453  147758 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:22:51.297464  147758 fix.go:54] fixHost starting: 
	I1010 19:22:51.297787  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:22:51.297848  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:22:51.314087  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1010 19:22:51.314588  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:22:51.315115  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:22:51.315138  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:22:51.315509  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:22:51.315691  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:22:51.315879  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:22:51.317597  147758 fix.go:112] recreateIfNeeded on embed-certs-541370: state=Stopped err=<nil>
	I1010 19:22:51.317621  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	W1010 19:22:51.317781  147758 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:22:51.319664  147758 out.go:177] * Restarting existing kvm2 VM for "embed-certs-541370" ...
	I1010 19:22:51.320967  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Start
	I1010 19:22:51.321134  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring networks are active...
	I1010 19:22:51.322026  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network default is active
	I1010 19:22:51.322468  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network mk-embed-certs-541370 is active
	I1010 19:22:51.322874  147758 main.go:141] libmachine: (embed-certs-541370) Getting domain xml...
	I1010 19:22:51.323687  147758 main.go:141] libmachine: (embed-certs-541370) Creating domain...
	I1010 19:22:51.294881  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:22:51.294927  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295226  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:22:51.295256  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295454  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:22:51.297198  147213 machine.go:96] duration metric: took 4m37.414594306s to provisionDockerMachine
	I1010 19:22:51.297252  147213 fix.go:56] duration metric: took 4m37.436635356s for fixHost
	I1010 19:22:51.297259  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 4m37.436668423s
	W1010 19:22:51.297278  147213 start.go:714] error starting host: provision: host is not running
	W1010 19:22:51.297382  147213 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1010 19:22:51.297396  147213 start.go:729] Will try again in 5 seconds ...
	I1010 19:22:52.568699  147758 main.go:141] libmachine: (embed-certs-541370) Waiting to get IP...
	I1010 19:22:52.569582  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.569952  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.570018  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.569935  148914 retry.go:31] will retry after 261.244287ms: waiting for machine to come up
	I1010 19:22:52.832639  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.833280  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.833310  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.833200  148914 retry.go:31] will retry after 304.116732ms: waiting for machine to come up
	I1010 19:22:53.138770  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.139091  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.139124  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.139055  148914 retry.go:31] will retry after 484.354474ms: waiting for machine to come up
	I1010 19:22:53.624831  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.625293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.625323  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.625234  148914 retry.go:31] will retry after 591.916836ms: waiting for machine to come up
	I1010 19:22:54.219214  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.219732  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.219763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.219673  148914 retry.go:31] will retry after 614.162479ms: waiting for machine to come up
	I1010 19:22:54.835573  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.836038  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.836063  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.835988  148914 retry.go:31] will retry after 824.170953ms: waiting for machine to come up
	I1010 19:22:55.662092  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:55.662646  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:55.662668  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:55.662586  148914 retry.go:31] will retry after 928.483848ms: waiting for machine to come up
	I1010 19:22:56.593200  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:56.593724  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:56.593756  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:56.593679  148914 retry.go:31] will retry after 941.138644ms: waiting for machine to come up
	I1010 19:22:56.299351  147213 start.go:360] acquireMachinesLock for no-preload-320324: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:22:57.536977  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:57.537403  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:57.537429  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:57.537331  148914 retry.go:31] will retry after 1.262203584s: waiting for machine to come up
	I1010 19:22:58.801921  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:58.802420  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:58.802454  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:58.802381  148914 retry.go:31] will retry after 2.154751391s: waiting for machine to come up
	I1010 19:23:00.960100  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:00.960661  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:00.960684  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:00.960607  148914 retry.go:31] will retry after 1.945155171s: waiting for machine to come up
	I1010 19:23:02.907705  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:02.908097  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:02.908129  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:02.908038  148914 retry.go:31] will retry after 3.245262469s: waiting for machine to come up
	I1010 19:23:06.157527  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:06.157897  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:06.157925  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:06.157858  148914 retry.go:31] will retry after 3.973579024s: waiting for machine to come up
	I1010 19:23:11.321975  148123 start.go:364] duration metric: took 3m17.648521443s to acquireMachinesLock for "old-k8s-version-947203"
	I1010 19:23:11.322043  148123 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:11.322056  148123 fix.go:54] fixHost starting: 
	I1010 19:23:11.322537  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:11.322596  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:11.339586  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1010 19:23:11.340091  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:11.340686  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:23:11.340719  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:11.341101  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:11.341295  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:11.341453  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetState
	I1010 19:23:11.343215  148123 fix.go:112] recreateIfNeeded on old-k8s-version-947203: state=Stopped err=<nil>
	I1010 19:23:11.343247  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	W1010 19:23:11.343408  148123 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:11.345653  148123 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-947203" ...
	I1010 19:23:10.135369  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has current primary IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135830  147758 main.go:141] libmachine: (embed-certs-541370) Found IP for machine: 192.168.39.120
	I1010 19:23:10.135839  147758 main.go:141] libmachine: (embed-certs-541370) Reserving static IP address...
	I1010 19:23:10.136283  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.136311  147758 main.go:141] libmachine: (embed-certs-541370) Reserved static IP address: 192.168.39.120
	I1010 19:23:10.136327  147758 main.go:141] libmachine: (embed-certs-541370) DBG | skip adding static IP to network mk-embed-certs-541370 - found existing host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"}
	I1010 19:23:10.136339  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Getting to WaitForSSH function...
	I1010 19:23:10.136351  147758 main.go:141] libmachine: (embed-certs-541370) Waiting for SSH to be available...
	I1010 19:23:10.138861  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139259  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.139293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139438  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH client type: external
	I1010 19:23:10.139472  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa (-rw-------)
	I1010 19:23:10.139517  147758 main.go:141] libmachine: (embed-certs-541370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:10.139541  147758 main.go:141] libmachine: (embed-certs-541370) DBG | About to run SSH command:
	I1010 19:23:10.139562  147758 main.go:141] libmachine: (embed-certs-541370) DBG | exit 0
	I1010 19:23:10.261078  147758 main.go:141] libmachine: (embed-certs-541370) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:10.261533  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetConfigRaw
	I1010 19:23:10.262192  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.265071  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265467  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.265515  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265737  147758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/config.json ...
	I1010 19:23:10.265941  147758 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:10.265960  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:10.266188  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.269186  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269618  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.269649  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269799  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.269984  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270206  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270345  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.270550  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.270834  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.270849  147758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:10.373285  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:10.373316  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373625  147758 buildroot.go:166] provisioning hostname "embed-certs-541370"
	I1010 19:23:10.373660  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373835  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.376552  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.376951  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.376994  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.377132  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.377332  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377489  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377606  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.377745  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.377918  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.377930  147758 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-541370 && echo "embed-certs-541370" | sudo tee /etc/hostname
	I1010 19:23:10.495847  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-541370
	
	I1010 19:23:10.495880  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.498868  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499205  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.499247  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499362  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.499556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499700  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499829  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.499961  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.500187  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.500210  147758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-541370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-541370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-541370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:10.614318  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:10.614357  147758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:10.614412  147758 buildroot.go:174] setting up certificates
	I1010 19:23:10.614429  147758 provision.go:84] configureAuth start
	I1010 19:23:10.614457  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.614763  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.617457  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.617888  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.617916  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.618078  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.620243  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620635  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.620666  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620789  147758 provision.go:143] copyHostCerts
	I1010 19:23:10.620895  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:10.620913  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:10.620998  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:10.621111  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:10.621123  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:10.621159  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:10.621245  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:10.621257  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:10.621292  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:10.621364  147758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.embed-certs-541370 san=[127.0.0.1 192.168.39.120 embed-certs-541370 localhost minikube]
	I1010 19:23:10.697456  147758 provision.go:177] copyRemoteCerts
	I1010 19:23:10.697515  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:10.697547  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.700439  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.700799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700956  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.701162  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.701320  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.701465  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:10.783442  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:10.808446  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 19:23:10.832117  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:23:10.856286  147758 provision.go:87] duration metric: took 241.840139ms to configureAuth
	I1010 19:23:10.856318  147758 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:10.856528  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:10.856640  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.859252  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859677  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.859708  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859916  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.860087  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860222  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.860524  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.860688  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.860702  147758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:11.086349  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:11.086375  147758 machine.go:96] duration metric: took 820.421344ms to provisionDockerMachine
	I1010 19:23:11.086386  147758 start.go:293] postStartSetup for "embed-certs-541370" (driver="kvm2")
	I1010 19:23:11.086401  147758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:11.086423  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.086755  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:11.086783  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.089482  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.089838  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.089860  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.090042  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.090253  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.090410  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.090535  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.172474  147758 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:11.176699  147758 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:11.176733  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:11.176800  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:11.176899  147758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:11.177044  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:11.186985  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:11.211385  147758 start.go:296] duration metric: took 124.982089ms for postStartSetup
	I1010 19:23:11.211442  147758 fix.go:56] duration metric: took 19.913977793s for fixHost
	I1010 19:23:11.211472  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.214421  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214780  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.214812  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214999  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.215219  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215429  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215612  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.215786  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:11.215974  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:11.215985  147758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:11.321786  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588191.295446348
	
	I1010 19:23:11.321814  147758 fix.go:216] guest clock: 1728588191.295446348
	I1010 19:23:11.321822  147758 fix.go:229] Guest: 2024-10-10 19:23:11.295446348 +0000 UTC Remote: 2024-10-10 19:23:11.211447413 +0000 UTC m=+249.373680838 (delta=83.998935ms)
	I1010 19:23:11.321870  147758 fix.go:200] guest clock delta is within tolerance: 83.998935ms
	I1010 19:23:11.321877  147758 start.go:83] releasing machines lock for "embed-certs-541370", held for 20.024455781s
	I1010 19:23:11.321905  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.322169  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:11.325004  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325350  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.325375  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325566  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326090  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326294  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326383  147758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:11.326444  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.326501  147758 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:11.326529  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.329311  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329657  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.329690  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329713  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329866  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330057  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330160  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.330188  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.330204  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330346  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.330538  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330687  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330821  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.406525  147758 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:11.428958  147758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:11.577663  147758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:11.584024  147758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:11.584112  147758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:11.603163  147758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:11.603190  147758 start.go:495] detecting cgroup driver to use...
	I1010 19:23:11.603291  147758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:11.624744  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:11.645477  147758 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:11.645537  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:11.660216  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:11.675019  147758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:11.796038  147758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:11.967750  147758 docker.go:233] disabling docker service ...
	I1010 19:23:11.967828  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:11.983184  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:12.001603  147758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:12.149408  147758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:12.306724  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:12.324302  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:12.345426  147758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:12.345508  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.357812  147758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:12.357883  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.370095  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.382389  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.395000  147758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:12.408429  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.426851  147758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.450568  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.463434  147758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:12.474537  147758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:12.474606  147758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:12.489074  147758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:12.500048  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:12.635695  147758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:12.733511  147758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:12.733593  147758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:12.739072  147758 start.go:563] Will wait 60s for crictl version
	I1010 19:23:12.739138  147758 ssh_runner.go:195] Run: which crictl
	I1010 19:23:12.743675  147758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:12.792272  147758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:12.792379  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.829968  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.862579  147758 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:11.347158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .Start
	I1010 19:23:11.347359  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring networks are active...
	I1010 19:23:11.348252  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network default is active
	I1010 19:23:11.348689  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network mk-old-k8s-version-947203 is active
	I1010 19:23:11.349139  148123 main.go:141] libmachine: (old-k8s-version-947203) Getting domain xml...
	I1010 19:23:11.349799  148123 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:23:12.671472  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting to get IP...
	I1010 19:23:12.672628  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.673145  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.673278  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.673132  149057 retry.go:31] will retry after 280.111667ms: waiting for machine to come up
	I1010 19:23:12.954865  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.955325  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.955377  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.955300  149057 retry.go:31] will retry after 289.967238ms: waiting for machine to come up
	I1010 19:23:13.247039  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.247663  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.247769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.247632  149057 retry.go:31] will retry after 319.085935ms: waiting for machine to come up
	I1010 19:23:12.863797  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:12.867335  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.867760  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:12.867794  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.868029  147758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:12.872503  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:12.887684  147758 kubeadm.go:883] updating cluster {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:12.887809  147758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:12.887853  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:12.924155  147758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:12.924240  147758 ssh_runner.go:195] Run: which lz4
	I1010 19:23:12.928613  147758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:12.933024  147758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:12.933069  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:14.450790  147758 crio.go:462] duration metric: took 1.522223644s to copy over tarball
	I1010 19:23:14.450893  147758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:16.642155  147758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191220673s)
	I1010 19:23:16.642193  147758 crio.go:469] duration metric: took 2.191371146s to extract the tarball
	I1010 19:23:16.642202  147758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:16.679611  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:16.723840  147758 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:16.723865  147758 cache_images.go:84] Images are preloaded, skipping loading
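The preceding lines describe the preload path: crictl reports no cached images, the preloaded-images tarball is copied to /preloaded.tar.lz4, extracted under /var, removed, and the image check is repeated. A rough Go sketch of the extract-and-clean-up step is below; running the commands locally via exec.Command is an assumption for illustration, since minikube drives them over SSH.

// preload_extract.go: extract a preloaded image tarball the way the logged tar command does (sketch).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball string) error {
	// Mirrors the logged command: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf <tarball>
	tar := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	tar.Stdout, tar.Stderr = os.Stdout, os.Stderr
	if err := tar.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	// The log shows the tarball being deleted once extraction succeeds.
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}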
	I1010 19:23:16.723874  147758 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.1 crio true true} ...
	I1010 19:23:16.723998  147758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-541370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:16.724081  147758 ssh_runner.go:195] Run: crio config
	I1010 19:23:16.779659  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:16.779682  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:16.779693  147758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:16.779714  147758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-541370 NodeName:embed-certs-541370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:16.779842  147758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-541370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:16.779904  147758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:16.791424  147758 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:16.791493  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:16.801715  147758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1010 19:23:16.821364  147758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:16.842703  147758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1010 19:23:16.864835  147758 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:16.868928  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
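The two /etc/hosts commands above are an idempotent edit: grep checks whether the entry already exists, and the bash one-liner filters out any stale line for the hostname before appending the current IP. The Go sketch below expresses the same idea, assuming direct file access for illustration (minikube performs the edit over SSH); ensureHostsEntry is a hypothetical helper name.

// hosts_update.go: drop any stale line for a hostname, then append "IP<TAB>hostname" (sketch).
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Keep every line that does not end in a tab followed by this hostname.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.120", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}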
	I1010 19:23:13.568192  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.568690  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.568721  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.568657  149057 retry.go:31] will retry after 575.032546ms: waiting for machine to come up
	I1010 19:23:14.145650  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.146293  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.146319  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.146234  149057 retry.go:31] will retry after 702.803794ms: waiting for machine to come up
	I1010 19:23:14.851201  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.851736  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.851769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.851706  149057 retry.go:31] will retry after 883.195401ms: waiting for machine to come up
	I1010 19:23:15.736627  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:15.737199  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:15.737252  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:15.737155  149057 retry.go:31] will retry after 794.699815ms: waiting for machine to come up
	I1010 19:23:16.533952  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:16.534510  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:16.534545  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:16.534443  149057 retry.go:31] will retry after 961.751912ms: waiting for machine to come up
	I1010 19:23:17.497535  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:17.498007  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:17.498035  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:17.497936  149057 retry.go:31] will retry after 1.423503471s: waiting for machine to come up
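The interleaved 148123 lines are the old-k8s-version-947203 VM booting: each "unable to find current IP address" is followed by a retry.go backoff ("will retry after ...: waiting for machine to come up") with a growing, jittered delay. A small sketch of that pattern follows; lookupIP is a hypothetical stand-in for reading the libvirt DHCP leases, and the delay formula is only an approximation of the logged behaviour.

// machine_ip_retry.go: retry-with-backoff while waiting for a VM to obtain an IP (sketch).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupIP is a placeholder; in minikube this inspects the domain's DHCP lease.
func lookupIP() (string, error) { return "", errNoIP }

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Back off a few hundred milliseconds, growing with each attempt, plus jitter.
		delay := time.Duration(200+rand.Intn(200)) * time.Millisecond * time.Duration(attempt)
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(10 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("machine IP:", ip)
}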
	I1010 19:23:16.883162  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:17.027646  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:17.045083  147758 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370 for IP: 192.168.39.120
	I1010 19:23:17.045108  147758 certs.go:194] generating shared ca certs ...
	I1010 19:23:17.045130  147758 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:17.045491  147758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:17.045561  147758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:17.045579  147758 certs.go:256] generating profile certs ...
	I1010 19:23:17.045730  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/client.key
	I1010 19:23:17.045814  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key.dd7630a8
	I1010 19:23:17.045874  147758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key
	I1010 19:23:17.046015  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:17.046055  147758 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:17.046075  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:17.046114  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:17.046150  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:17.046177  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:17.046235  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:17.047131  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:17.087057  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:17.137707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:17.181707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:17.213227  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 19:23:17.247846  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:17.275989  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:17.301144  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 19:23:17.326232  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:17.350586  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:17.374666  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:17.399570  147758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:17.417846  147758 ssh_runner.go:195] Run: openssl version
	I1010 19:23:17.424206  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:17.436091  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441020  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441090  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.447318  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:17.459191  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:17.470878  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476185  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476248  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.482808  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:17.494626  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:17.506522  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511484  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511558  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.517445  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:17.529109  147758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:17.534139  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:17.540846  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:17.547429  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:17.554350  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:17.561036  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:17.567571  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
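The run of "openssl x509 -noout -in ... -checkend 86400" commands above verifies that each control-plane certificate is still valid for at least 24 hours before reuse. The sketch below shows the equivalent check with crypto/x509; the certificate path is one example taken from the log, and expiresWithin is an illustrative helper, not minikube's implementation.

// cert_checkend.go: fail if a PEM certificate expires within the given window (sketch).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Same question openssl -checkend asks: is NotAfter earlier than now+window?
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	}
}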
	I1010 19:23:17.574019  147758 kubeadm.go:392] StartCluster: {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:17.574128  147758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:17.574187  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.612699  147758 cri.go:89] found id: ""
	I1010 19:23:17.612804  147758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:17.623827  147758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:17.623856  147758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:17.623917  147758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:17.634732  147758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:17.635754  147758 kubeconfig.go:125] found "embed-certs-541370" server: "https://192.168.39.120:8443"
	I1010 19:23:17.637813  147758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:17.648543  147758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I1010 19:23:17.648590  147758 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:17.648606  147758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:17.648671  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.693966  147758 cri.go:89] found id: ""
	I1010 19:23:17.694057  147758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:17.715977  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:17.727871  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:17.727891  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:17.727942  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:17.738274  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:17.738340  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:17.748925  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:17.758945  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:17.759008  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:17.769169  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.779196  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:17.779282  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.790948  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:17.802264  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:17.802332  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
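The four grep/rm pairs above implement stale-config cleanup: each kubeadm kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before the init phases rewrite it. A compact Go sketch of that check-then-remove loop follows; removeIfStale is an illustrative helper and local file access replaces the SSH commands seen in the log.

// stale_config_check.go: remove kubeconfigs that do not reference the expected control-plane endpoint (sketch).
package main

import (
	"bytes"
	"fmt"
	"os"
)

func removeIfStale(path, wantServer string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to clean up, same as the "No such file" cases in the log
		}
		return err
	}
	if bytes.Contains(data, []byte(wantServer)) {
		return nil // already points at the expected endpoint
	}
	fmt.Printf("removing stale %s\n", path)
	return os.Remove(path)
}

func main() {
	server := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, server); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}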
	I1010 19:23:17.814009  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:17.826820  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:17.947270  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.128720  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.181409785s)
	I1010 19:23:19.128770  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.343735  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.419728  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.529802  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:19.529930  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.030019  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.530833  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.558314  147758 api_server.go:72] duration metric: took 1.028510044s to wait for apiserver process to appear ...
	I1010 19:23:20.558350  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:23:20.558375  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:20.558991  147758 api_server.go:269] stopped: https://192.168.39.120:8443/healthz: Get "https://192.168.39.120:8443/healthz": dial tcp 192.168.39.120:8443: connect: connection refused
	I1010 19:23:21.058727  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
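The healthz wait that starts here polls https://192.168.39.120:8443/healthz roughly every half second, tolerating connection-refused, 403, and 500 responses until the apiserver finally answers 200 (visible further below). The sketch that follows shows the shape of such a loop; using InsecureSkipVerify is a simplification for brevity, whereas minikube trusts the cluster CA.

// healthz_poll.go: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes (sketch).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to the "returned 200: ok" line below
			}
			// 403/500 with a poststarthook breakdown is expected while bootstrap controllers finish.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.120:8443/healthz", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}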
	I1010 19:23:18.923057  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:18.923636  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:18.923671  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:18.923582  149057 retry.go:31] will retry after 2.09836426s: waiting for machine to come up
	I1010 19:23:21.023500  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:21.024054  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:21.024084  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:21.023999  149057 retry.go:31] will retry after 2.809962093s: waiting for machine to come up
	I1010 19:23:23.187135  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:23:23.187187  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:23:23.187203  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.233367  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.233414  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:23.558658  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.575108  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.575139  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.058679  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.065735  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:24.065763  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.559440  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.565460  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:23:24.571828  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:23:24.571859  147758 api_server.go:131] duration metric: took 4.013501806s to wait for apiserver health ...
	I1010 19:23:24.571869  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:24.571875  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:24.573875  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:23:24.575458  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:23:24.586870  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:23:24.624362  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:23:24.643465  147758 system_pods.go:59] 8 kube-system pods found
	I1010 19:23:24.643516  147758 system_pods.go:61] "coredns-7c65d6cfc9-fgtkg" [df696e79-ca6f-4d73-a57e-9c6cdc93c505] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:23:24.643532  147758 system_pods.go:61] "etcd-embed-certs-541370" [254fa12c-b0d2-499f-8dd9-c1505efeaaab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:23:24.643543  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [fcd3809d-d325-4481-8e86-c246e29458fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:23:24.643565  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ab0fdd6b-d9b7-48dc-b82f-29b21d2295ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:23:24.643584  147758 system_pods.go:61] "kube-proxy-f5l6x" [446383fa-44c5-4b9e-bfc5-e38799597e75] Running
	I1010 19:23:24.643592  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [1c6af7e7-ce16-4ae2-8feb-e5d474173de1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:23:24.643603  147758 system_pods.go:61] "metrics-server-6867b74b74-kw529" [aad00321-d499-4563-849e-286d6e699fc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:23:24.643611  147758 system_pods.go:61] "storage-provisioner" [df4ae621-5066-4276-9276-a0538a9f9dd1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:23:24.643620  147758 system_pods.go:74] duration metric: took 19.234558ms to wait for pod list to return data ...
	I1010 19:23:24.643637  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:23:24.651647  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:23:24.651683  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:23:24.651699  147758 node_conditions.go:105] duration metric: took 8.056629ms to run NodePressure ...
	I1010 19:23:24.651720  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:24.915651  147758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921104  147758 kubeadm.go:739] kubelet initialised
	I1010 19:23:24.921131  147758 kubeadm.go:740] duration metric: took 5.44643ms waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921142  147758 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:23:24.927535  147758 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
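From here the test waits up to 4m0s for each system-critical pod to reach the Ready condition, re-checking every couple of seconds and logging "Ready":"False" in the meantime. Below is a client-go sketch of that kind of wait; the kubeconfig discovery, the fixed pod name taken from the log, and the 2s interval are assumptions for illustration, not minikube's pod_ready implementation.

// pod_ready_wait.go: poll a pod until its PodReady condition is True (sketch).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-fgtkg", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // roughly the spacing of the status checks in the log
	}
	fmt.Println(`pod still has status "Ready":"False"`)
}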
	I1010 19:23:23.837827  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:23.838271  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:23.838295  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:23.838220  149057 retry.go:31] will retry after 2.562944835s: waiting for machine to come up
	I1010 19:23:26.402699  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:26.403250  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:26.403294  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:26.403192  149057 retry.go:31] will retry after 3.867656846s: waiting for machine to come up
	I1010 19:23:26.932764  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:28.936055  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.434959  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.893914  148525 start.go:364] duration metric: took 2m17.836396131s to acquireMachinesLock for "default-k8s-diff-port-361847"
	I1010 19:23:31.893993  148525 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:31.894007  148525 fix.go:54] fixHost starting: 
	I1010 19:23:31.894438  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:31.894502  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:31.914583  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I1010 19:23:31.915054  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:31.915535  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:23:31.915560  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:31.915967  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:31.916207  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:31.916387  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:23:31.918035  148525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-361847: state=Stopped err=<nil>
	I1010 19:23:31.918073  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	W1010 19:23:31.918241  148525 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:31.920390  148525 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-361847" ...
	I1010 19:23:30.275222  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.275762  148123 main.go:141] libmachine: (old-k8s-version-947203) Found IP for machine: 192.168.61.112
	I1010 19:23:30.275779  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserving static IP address...
	I1010 19:23:30.275794  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has current primary IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.276478  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.276529  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserved static IP address: 192.168.61.112
	I1010 19:23:30.276553  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | skip adding static IP to network mk-old-k8s-version-947203 - found existing host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"}
	I1010 19:23:30.276574  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:23:30.276590  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting for SSH to be available...
	I1010 19:23:30.278486  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278854  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.278890  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278984  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:23:30.279016  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:23:30.279051  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:30.279068  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:23:30.279080  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:23:30.405053  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:30.405492  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:23:30.406224  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.408830  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409178  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.409204  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409462  148123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:23:30.409673  148123 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:30.409693  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:30.409916  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.412131  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412400  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.412427  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.412770  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.412965  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.413117  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.413368  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.413653  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.413668  148123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:30.525149  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:30.525176  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525417  148123 buildroot.go:166] provisioning hostname "old-k8s-version-947203"
	I1010 19:23:30.525478  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525703  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.528343  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528681  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.528708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528841  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.529035  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529185  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529303  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.529460  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.529635  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.529645  148123 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947203 && echo "old-k8s-version-947203" | sudo tee /etc/hostname
	I1010 19:23:30.655605  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947203
	
	I1010 19:23:30.655644  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.658439  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.658834  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.658869  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.659063  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.659285  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659458  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.659733  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.659905  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.659921  148123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:30.781974  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:30.782011  148123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:30.782033  148123 buildroot.go:174] setting up certificates
	I1010 19:23:30.782048  148123 provision.go:84] configureAuth start
	I1010 19:23:30.782059  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.782379  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.785189  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785584  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.785613  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785782  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.788086  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788432  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.788458  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788582  148123 provision.go:143] copyHostCerts
	I1010 19:23:30.788645  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:30.788660  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:30.788729  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:30.788839  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:30.788867  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:30.788900  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:30.788977  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:30.788986  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:30.789013  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:30.789081  148123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947203 san=[127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]
	I1010 19:23:31.240626  148123 provision.go:177] copyRemoteCerts
	I1010 19:23:31.240698  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:31.240737  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.243678  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244108  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.244138  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244356  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.244565  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.244765  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.244929  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.331337  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:31.356266  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1010 19:23:31.381186  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:31.405301  148123 provision.go:87] duration metric: took 623.235619ms to configureAuth
	I1010 19:23:31.405337  148123 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:31.405541  148123 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:23:31.405620  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.408111  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408444  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.408470  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408695  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.408947  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409109  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.409374  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.409583  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.409609  148123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:31.647023  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:31.647050  148123 machine.go:96] duration metric: took 1.237364012s to provisionDockerMachine
	I1010 19:23:31.647063  148123 start.go:293] postStartSetup for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:23:31.647073  148123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:31.647104  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.647464  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:31.647502  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.649991  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650288  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.650311  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650484  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.650637  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.650832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.650968  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.735985  148123 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:31.740689  148123 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:31.740725  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:31.740803  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:31.740947  148123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:31.741057  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:31.751383  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:31.777091  148123 start.go:296] duration metric: took 130.011618ms for postStartSetup
	I1010 19:23:31.777134  148123 fix.go:56] duration metric: took 20.455078936s for fixHost
	I1010 19:23:31.777158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.780083  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780661  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.780708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.781069  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781245  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781382  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.781534  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.781711  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.781721  148123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:31.893727  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588211.849777581
	
	I1010 19:23:31.893756  148123 fix.go:216] guest clock: 1728588211.849777581
	I1010 19:23:31.893763  148123 fix.go:229] Guest: 2024-10-10 19:23:31.849777581 +0000 UTC Remote: 2024-10-10 19:23:31.777138808 +0000 UTC m=+218.253674512 (delta=72.638773ms)
	I1010 19:23:31.893806  148123 fix.go:200] guest clock delta is within tolerance: 72.638773ms
	I1010 19:23:31.893813  148123 start.go:83] releasing machines lock for "old-k8s-version-947203", held for 20.571797307s
	I1010 19:23:31.893848  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.894151  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:31.896747  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897156  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.897207  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897385  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898036  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898375  148123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:31.898422  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.898461  148123 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:31.898487  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.901274  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901450  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901608  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901649  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901784  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.901920  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901945  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901952  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902112  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.902130  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902306  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902348  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.902412  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902533  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:32.016264  148123 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:32.022618  148123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:32.169502  148123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:32.175960  148123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:32.176039  148123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:32.193584  148123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:32.193615  148123 start.go:495] detecting cgroup driver to use...
	I1010 19:23:32.193698  148123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:32.210923  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:32.227172  148123 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:32.227254  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:32.242142  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:32.256896  148123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:32.387462  148123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:32.575184  148123 docker.go:233] disabling docker service ...
	I1010 19:23:32.575257  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:32.590667  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:32.604825  148123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:32.739827  148123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:32.863648  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:32.879435  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:32.899582  148123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:23:32.899659  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.915010  148123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:32.915081  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.931181  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.944038  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.955182  148123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:32.972220  148123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:32.982681  148123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:32.982739  148123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:32.997012  148123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:33.009468  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:33.180403  148123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:33.284934  148123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:33.285008  148123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:33.290063  148123 start.go:563] Will wait 60s for crictl version
	I1010 19:23:33.290124  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:33.294752  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:33.342710  148123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:33.342792  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.372821  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.409395  148123 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:23:33.411012  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:33.414374  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.414789  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:33.414836  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.415084  148123 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:33.420164  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:33.433442  148123 kubeadm.go:883] updating cluster {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:33.433631  148123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:23:33.433717  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:33.480013  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:33.480078  148123 ssh_runner.go:195] Run: which lz4
	I1010 19:23:33.485443  148123 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:33.490986  148123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:33.491032  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:23:31.921836  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Start
	I1010 19:23:31.922036  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring networks are active...
	I1010 19:23:31.922890  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network default is active
	I1010 19:23:31.923271  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network mk-default-k8s-diff-port-361847 is active
	I1010 19:23:31.923685  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Getting domain xml...
	I1010 19:23:31.924449  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Creating domain...
	I1010 19:23:33.241164  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting to get IP...
	I1010 19:23:33.242273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242713  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242814  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.242702  149213 retry.go:31] will retry after 195.013046ms: waiting for machine to come up
	I1010 19:23:33.438965  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439452  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.439379  149213 retry.go:31] will retry after 344.223823ms: waiting for machine to come up
	I1010 19:23:33.785167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785833  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785864  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.785780  149213 retry.go:31] will retry after 342.787658ms: waiting for machine to come up
	I1010 19:23:33.435066  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:34.936768  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:34.936800  147758 pod_ready.go:82] duration metric: took 10.009235225s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:34.936814  147758 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944395  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.944430  147758 pod_ready.go:82] duration metric: took 1.007599746s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944445  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953224  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.953255  147758 pod_ready.go:82] duration metric: took 8.801702ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953266  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.227279  148123 crio.go:462] duration metric: took 1.74186828s to copy over tarball
	I1010 19:23:35.227383  148123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:38.216499  148123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.989082447s)
	I1010 19:23:38.216533  148123 crio.go:469] duration metric: took 2.989218587s to extract the tarball
	I1010 19:23:38.216541  148123 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:38.259699  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:38.308284  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:38.308313  148123 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:23:38.308422  148123 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:23:38.308477  148123 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.308482  148123 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.308593  148123 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.308617  148123 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.308405  148123 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.308446  148123 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.308530  148123 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310558  148123 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.310612  148123 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.310642  148123 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.310652  148123 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310645  148123 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.310560  148123 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.310733  148123 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.311068  148123 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:23:38.469174  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.469563  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.488463  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.490815  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.491841  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.496079  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.501292  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:23:34.130443  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130998  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.130915  149213 retry.go:31] will retry after 393.100812ms: waiting for machine to come up
	I1010 19:23:34.525570  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526032  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526060  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.525980  149213 retry.go:31] will retry after 465.468437ms: waiting for machine to come up
	I1010 19:23:34.992775  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993348  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993386  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.993287  149213 retry.go:31] will retry after 907.884473ms: waiting for machine to come up
	I1010 19:23:35.902481  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902942  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:35.902878  149213 retry.go:31] will retry after 1.157806188s: waiting for machine to come up
	I1010 19:23:37.062068  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062777  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:37.062706  149213 retry.go:31] will retry after 1.432559208s: waiting for machine to come up
	I1010 19:23:38.496653  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497153  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:38.497066  149213 retry.go:31] will retry after 1.559787003s: waiting for machine to come up
	I1010 19:23:37.961068  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.065559  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.528757  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.528786  147758 pod_ready.go:82] duration metric: took 4.575513259s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.528802  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538002  147758 pod_ready.go:93] pod "kube-proxy-f5l6x" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.538034  147758 pod_ready.go:82] duration metric: took 9.22357ms for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538049  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543594  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.543615  147758 pod_ready.go:82] duration metric: took 5.558665ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543626  147758 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:38.581315  148123 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:23:38.581361  148123 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:23:38.581385  148123 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.581407  148123 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.581442  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.581457  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.642668  148123 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:23:38.642721  148123 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.642777  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658235  148123 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:23:38.658291  148123 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.658288  148123 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:23:38.658328  148123 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.658346  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658373  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.678038  148123 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:23:38.678097  148123 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.678155  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682719  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.682773  148123 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:23:38.682721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.682791  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.682812  148123 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:23:38.682845  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.682872  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.691181  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842626  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:38.842714  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.842754  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.842797  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.842721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.842851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842789  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.989994  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.005815  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:39.005923  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:39.010029  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:39.010105  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:39.010150  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:39.010230  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:39.134646  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:39.141431  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.176750  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:23:39.176945  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:23:39.176989  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:23:39.177038  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:23:39.177073  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:23:39.177104  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:23:39.335056  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:23:39.335113  148123 cache_images.go:92] duration metric: took 1.026784312s to LoadCachedImages
	W1010 19:23:39.335180  148123 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1010 19:23:39.335192  148123 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.20.0 crio true true} ...
	I1010 19:23:39.335305  148123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-947203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:39.335380  148123 ssh_runner.go:195] Run: crio config
	I1010 19:23:39.394307  148123 cni.go:84] Creating CNI manager for ""
	I1010 19:23:39.394338  148123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:39.394350  148123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:39.394378  148123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947203 NodeName:old-k8s-version-947203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:23:39.394572  148123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-947203"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:39.394662  148123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:23:39.405510  148123 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:39.405600  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:39.415968  148123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1010 19:23:39.443234  148123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:39.462448  148123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:23:39.481959  148123 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:39.486188  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:39.501922  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:39.642769  148123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:39.661335  148123 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203 for IP: 192.168.61.112
	I1010 19:23:39.661363  148123 certs.go:194] generating shared ca certs ...
	I1010 19:23:39.661384  148123 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:39.661595  148123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:39.661658  148123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:39.661671  148123 certs.go:256] generating profile certs ...
	I1010 19:23:39.661779  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key
	I1010 19:23:39.661867  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52
	I1010 19:23:39.661922  148123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key
	I1010 19:23:39.662312  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:39.662395  148123 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:39.662410  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:39.662447  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:39.662501  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:39.662531  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:39.662622  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:39.663372  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:39.710366  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:39.752724  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:39.801408  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:39.843557  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 19:23:39.892522  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:39.938519  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:39.966140  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:39.992974  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:40.019381  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:40.045769  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:40.080683  148123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:40.099747  148123 ssh_runner.go:195] Run: openssl version
	I1010 19:23:40.107865  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:40.123158  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128594  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128660  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.135319  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:40.147514  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:40.162463  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168387  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168512  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.176304  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:40.191306  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:40.204196  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211044  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211122  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.219574  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:40.232634  148123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:40.237639  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:40.244141  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:40.251188  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:40.258538  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:40.265029  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:40.271754  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:23:40.278352  148123 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:40.278453  148123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:40.278518  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.317771  148123 cri.go:89] found id: ""
	I1010 19:23:40.317838  148123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:40.330494  148123 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:40.330520  148123 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:40.330576  148123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:40.341984  148123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:40.343386  148123 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:23:40.344334  148123 kubeconfig.go:62] /home/jenkins/minikube-integration/19787-81676/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-947203" cluster setting kubeconfig missing "old-k8s-version-947203" context setting]
	I1010 19:23:40.345360  148123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:40.347128  148123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:40.357902  148123 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I1010 19:23:40.357944  148123 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:40.357961  148123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:40.358031  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.399970  148123 cri.go:89] found id: ""
	I1010 19:23:40.400057  148123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:40.418527  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:40.429182  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:40.429217  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:40.429262  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:40.439287  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:40.439363  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:40.449479  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:40.459288  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:40.459359  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:40.472733  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.484890  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:40.484958  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.495301  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:40.504750  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:40.504820  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:40.515034  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:40.526168  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:40.675022  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.566371  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.815296  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.930082  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:42.027597  148123 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:42.027713  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:42.528219  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.527735  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:40.058247  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058783  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058835  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:40.058696  149213 retry.go:31] will retry after 2.214094081s: waiting for machine to come up
	I1010 19:23:42.274629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275194  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:42.275106  149213 retry.go:31] will retry after 2.126528577s: waiting for machine to come up
	I1010 19:23:42.550865  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:45.051043  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:44.028463  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.527916  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.027911  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.528797  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.028772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.527982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.027799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.527894  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.028352  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.527935  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.403101  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403575  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403616  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:44.403534  149213 retry.go:31] will retry after 3.603964622s: waiting for machine to come up
	I1010 19:23:48.008726  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009142  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009191  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:48.009100  149213 retry.go:31] will retry after 3.639744981s: waiting for machine to come up
	I1010 19:23:47.551003  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:49.661572  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:52.858209  147213 start.go:364] duration metric: took 56.558774237s to acquireMachinesLock for "no-preload-320324"
	I1010 19:23:52.858274  147213 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:52.858283  147213 fix.go:54] fixHost starting: 
	I1010 19:23:52.858705  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:52.858742  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:52.878428  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I1010 19:23:52.878955  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:52.879563  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:23:52.879599  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:52.879945  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:52.880144  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:23:52.880282  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:23:52.881626  147213 fix.go:112] recreateIfNeeded on no-preload-320324: state=Stopped err=<nil>
	I1010 19:23:52.881650  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	W1010 19:23:52.881799  147213 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:52.883912  147213 out.go:177] * Restarting existing kvm2 VM for "no-preload-320324" ...
	I1010 19:23:49.028421  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:49.528271  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.028462  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.527867  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.028782  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.528581  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.028732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.027852  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.528647  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.885239  147213 main.go:141] libmachine: (no-preload-320324) Calling .Start
	I1010 19:23:52.885429  147213 main.go:141] libmachine: (no-preload-320324) Ensuring networks are active...
	I1010 19:23:52.886211  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network default is active
	I1010 19:23:52.886749  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network mk-no-preload-320324 is active
	I1010 19:23:52.887310  147213 main.go:141] libmachine: (no-preload-320324) Getting domain xml...
	I1010 19:23:52.888034  147213 main.go:141] libmachine: (no-preload-320324) Creating domain...
	I1010 19:23:51.652975  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653464  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Found IP for machine: 192.168.50.32
	I1010 19:23:51.653487  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserving static IP address...
	I1010 19:23:51.653509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has current primary IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653910  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.653956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | skip adding static IP to network mk-default-k8s-diff-port-361847 - found existing host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"}
	I1010 19:23:51.653974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserved static IP address: 192.168.50.32
	I1010 19:23:51.653993  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for SSH to be available...
	I1010 19:23:51.654006  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Getting to WaitForSSH function...
	I1010 19:23:51.655927  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656210  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.656240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656334  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH client type: external
	I1010 19:23:51.656372  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa (-rw-------)
	I1010 19:23:51.656409  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:51.656425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | About to run SSH command:
	I1010 19:23:51.656436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | exit 0
	I1010 19:23:51.780839  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:51.781206  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetConfigRaw
	I1010 19:23:51.781939  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:51.784347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784663  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.784696  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784918  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:23:51.785134  148525 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:51.785158  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:51.785403  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.787817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788306  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.788347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788547  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.788807  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789038  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789274  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.789515  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.789802  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.789825  148525 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:51.893367  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:51.893399  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893652  148525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-361847"
	I1010 19:23:51.893699  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.896986  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897377  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.897422  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897662  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.897815  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.897949  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.898064  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.898302  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.898489  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.898502  148525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-361847 && echo "default-k8s-diff-port-361847" | sudo tee /etc/hostname
	I1010 19:23:52.015158  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361847
	
	I1010 19:23:52.015199  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.018094  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018468  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.018497  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018683  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.018901  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019039  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.019474  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.019690  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.019708  148525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-361847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-361847/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-361847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:52.133923  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:52.133960  148525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:52.134007  148525 buildroot.go:174] setting up certificates
	I1010 19:23:52.134023  148525 provision.go:84] configureAuth start
	I1010 19:23:52.134043  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:52.134351  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.137242  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137637  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.137670  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137860  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.140264  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.140672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140833  148525 provision.go:143] copyHostCerts
	I1010 19:23:52.140907  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:52.140922  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:52.140977  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:52.141088  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:52.141098  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:52.141118  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:52.141175  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:52.141182  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:52.141213  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:52.141264  148525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-361847 san=[127.0.0.1 192.168.50.32 default-k8s-diff-port-361847 localhost minikube]
	I1010 19:23:52.241146  148525 provision.go:177] copyRemoteCerts
	I1010 19:23:52.241212  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:52.241241  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.244061  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244463  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.244490  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244731  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.244929  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.245110  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.245228  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.327309  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:52.352288  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 19:23:52.376308  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:52.400807  148525 provision.go:87] duration metric: took 266.765119ms to configureAuth
	I1010 19:23:52.400862  148525 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:52.401065  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:52.401171  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.403552  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.403919  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.403950  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.404173  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.404371  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404513  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.404743  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.404927  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.404949  148525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:52.622902  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:52.622930  148525 machine.go:96] duration metric: took 837.779579ms to provisionDockerMachine
	I1010 19:23:52.622942  148525 start.go:293] postStartSetup for "default-k8s-diff-port-361847" (driver="kvm2")
	I1010 19:23:52.622952  148525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:52.622968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.623331  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:52.623369  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.626106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626435  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.626479  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626721  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.626932  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.627091  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.627262  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.708050  148525 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:52.712524  148525 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:52.712550  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:52.712608  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:52.712688  148525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:52.712782  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:52.723719  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:52.747686  148525 start.go:296] duration metric: took 124.729371ms for postStartSetup
	I1010 19:23:52.747727  148525 fix.go:56] duration metric: took 20.853721623s for fixHost
	I1010 19:23:52.747749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.750316  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750645  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.750677  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.751046  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751195  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751333  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.751511  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.751733  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.751749  148525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:52.857986  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588232.831281012
	
	I1010 19:23:52.858019  148525 fix.go:216] guest clock: 1728588232.831281012
	I1010 19:23:52.858029  148525 fix.go:229] Guest: 2024-10-10 19:23:52.831281012 +0000 UTC Remote: 2024-10-10 19:23:52.747731551 +0000 UTC m=+158.845659062 (delta=83.549461ms)
	I1010 19:23:52.858075  148525 fix.go:200] guest clock delta is within tolerance: 83.549461ms
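The fix.go lines above run `date +%s.%N` on the guest, parse the result, and accept the machine when the skew against the host-side timestamp stays inside a tolerance (here 83.5ms). A minimal sketch of that comparison, using the values from the log; the 2s tolerance and the clockDeltaOK name are assumptions, not minikube's actual code:

	// Sketch of the guest-clock check logged above: parse the guest's
	// "date +%s.%N" output, compare it to the host clock, and accept the
	// skew if it stays below a tolerance. clockDeltaOK is hypothetical.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	func clockDeltaOK(guestOutput string, hostNow time.Time, tolerance time.Duration) (time.Duration, bool) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, false
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Duration(math.Abs(float64(hostNow.Sub(guest))))
		return delta, delta <= tolerance
	}

	func main() {
		// Values taken from the log lines above (guest clock vs. Remote timestamp).
		hostNow := time.Date(2024, 10, 10, 19, 23, 52, 747731551, time.UTC)
		delta, ok := clockDeltaOK("1728588232.831281012", hostNow, 2*time.Second)
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
	}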
	I1010 19:23:52.858088  148525 start.go:83] releasing machines lock for "default-k8s-diff-port-361847", held for 20.964121636s
	I1010 19:23:52.858120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.858491  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.861220  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.861672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861828  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862337  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862548  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862655  148525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:52.862702  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.862825  148525 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:52.862854  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.865579  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.865960  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866290  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866300  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.866319  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866423  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866496  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866648  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866671  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.866798  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866910  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.966354  148525 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:52.972526  148525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:53.119801  148525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:53.126287  148525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:53.126355  148525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:53.147301  148525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:53.147325  148525 start.go:495] detecting cgroup driver to use...
	I1010 19:23:53.147381  148525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:53.167368  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:53.183239  148525 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:53.183308  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:53.203230  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:53.217261  148525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:53.343555  148525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:53.491952  148525 docker.go:233] disabling docker service ...
	I1010 19:23:53.492054  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:53.508136  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:53.521662  148525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:53.651858  148525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:53.781954  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:53.803934  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:53.826070  148525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:53.826146  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.837506  148525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:53.837587  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.848653  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.860511  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.873254  148525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:53.887862  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.899507  148525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.923325  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
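The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf so that cri-o uses the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager, with conmon placed in the "pod" cgroup and a default_sysctls entry opening unprivileged ports. As a rough illustration only (the real flow drives sed over SSH, as logged), a Go sketch covering two of those rewrites; rewriteCrioConf is a hypothetical helper:

	// Sketch: apply the pause-image and cgroup-manager rewrites from the
	// logged sed commands to a cri-o drop-in, in-process. It deliberately
	// omits the default_sysctls handling shown above.
	package main

	import (
		"fmt"
		"regexp"
	)

	func rewriteCrioConf(conf string) string {
		rePause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = rePause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

		reCgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = reCgroup.ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(in))
	}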
	I1010 19:23:53.934999  148525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:53.946869  148525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:53.946945  148525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:53.968116  148525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:53.980109  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:54.106345  148525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:54.210345  148525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:54.210417  148525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:54.215968  148525 start.go:563] Will wait 60s for crictl version
	I1010 19:23:54.216037  148525 ssh_runner.go:195] Run: which crictl
	I1010 19:23:54.219885  148525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:54.260286  148525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:54.260375  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.289908  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.320940  148525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:52.050137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.060194  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:56.551981  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.027982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.528065  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.027890  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.527753  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.027877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.028263  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.028743  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.528039  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.234149  147213 main.go:141] libmachine: (no-preload-320324) Waiting to get IP...
	I1010 19:23:54.235147  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.235598  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.235657  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.235580  149378 retry.go:31] will retry after 308.921504ms: waiting for machine to come up
	I1010 19:23:54.546327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.547002  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.547029  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.546956  149378 retry.go:31] will retry after 288.92327ms: waiting for machine to come up
	I1010 19:23:54.837625  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.838136  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.838164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.838054  149378 retry.go:31] will retry after 321.948113ms: waiting for machine to come up
	I1010 19:23:55.161940  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.162494  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.162526  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.162441  149378 retry.go:31] will retry after 573.848095ms: waiting for machine to come up
	I1010 19:23:55.739080  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.739592  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.739620  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.739494  149378 retry.go:31] will retry after 529.087622ms: waiting for machine to come up
	I1010 19:23:56.270324  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.270899  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.270929  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.270850  149378 retry.go:31] will retry after 629.204989ms: waiting for machine to come up
	I1010 19:23:56.901836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.902283  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.902325  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.902222  149378 retry.go:31] will retry after 804.309499ms: waiting for machine to come up
	I1010 19:23:57.708806  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:57.709175  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:57.709208  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:57.709151  149378 retry.go:31] will retry after 1.204078295s: waiting for machine to come up
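The block above shows libmachine polling for the no-preload VM's DHCP lease with steadily growing delays ("will retry after ...: waiting for machine to come up"). As an illustration only, and not minikube's actual retry.go, a retry loop with a growing, jittered delay looks roughly like this; lookupIP, the initial delay, and the growth factor are assumptions:

	// Illustrative sketch: retry with growing, jittered delays, similar in
	// spirit to the "will retry after ..." lines above. Not minikube code.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for asking libvirt/DHCP for the guest's address.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Add jitter and grow the delay a little on every attempt.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay += delay / 2
		}
		return "", fmt.Errorf("timed out after %v waiting for machine to come up", timeout)
	}

	func main() {
		if _, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		}
	}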
	I1010 19:23:54.322534  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:54.325744  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326217  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:54.326257  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326533  148525 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:54.331527  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
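The /etc/hosts update above is a small shell pipeline: drop any existing host.minikube.internal line, append the fresh "ip<TAB>name" entry, and copy the result back into place. A minimal Go sketch of the same idea; updateHostsEntry is a hypothetical helper, the real code runs the logged bash one-liner over SSH:

	// Sketch: rebuild /etc/hosts content with a single "<ip>\t<host>" entry,
	// mirroring the logged "{ grep -v ...; echo ...; } > /tmp/h.$$" pipeline.
	package main

	import (
		"fmt"
		"strings"
	)

	func updateHostsEntry(hosts, ip, name string) string {
		var out []string
		for _, line := range strings.Split(hosts, "\n") {
			// grep -v $'\t<name>$' — drop any stale entry for this hostname.
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			out = append(out, line)
		}
		out = append(out, ip+"\t"+name)
		return strings.Join(out, "\n") + "\n"
	}

	func main() {
		existing := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal"
		fmt.Print(updateHostsEntry(existing, "192.168.50.1", "host.minikube.internal"))
	}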
	I1010 19:23:54.343881  148525 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:54.344033  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:54.344084  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:54.389066  148525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:54.389149  148525 ssh_runner.go:195] Run: which lz4
	I1010 19:23:54.393550  148525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:54.397787  148525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:54.397833  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:55.897111  148525 crio.go:462] duration metric: took 1.503593301s to copy over tarball
	I1010 19:23:55.897212  148525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:58.060691  148525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16343467s)
	I1010 19:23:58.060731  148525 crio.go:469] duration metric: took 2.163580526s to extract the tarball
	I1010 19:23:58.060741  148525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:58.103877  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:58.162881  148525 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:58.162907  148525 cache_images.go:84] Images are preloaded, skipping loading
	I1010 19:23:58.162915  148525 kubeadm.go:934] updating node { 192.168.50.32 8444 v1.31.1 crio true true} ...
	I1010 19:23:58.163031  148525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-361847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:58.163098  148525 ssh_runner.go:195] Run: crio config
	I1010 19:23:58.219804  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:23:58.219827  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:58.219837  148525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:58.219861  148525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-361847 NodeName:default-k8s-diff-port-361847 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:58.219982  148525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-361847"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:58.220042  148525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:58.231444  148525 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:58.231565  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:58.241835  148525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1010 19:23:58.259408  148525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:58.276571  148525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I1010 19:23:58.294640  148525 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:58.298503  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:58.312286  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:58.449757  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:58.467342  148525 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847 for IP: 192.168.50.32
	I1010 19:23:58.467377  148525 certs.go:194] generating shared ca certs ...
	I1010 19:23:58.467398  148525 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:58.467583  148525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:58.467642  148525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:58.467655  148525 certs.go:256] generating profile certs ...
	I1010 19:23:58.467826  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/client.key
	I1010 19:23:58.467895  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key.ae5e3f04
	I1010 19:23:58.467951  148525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key
	I1010 19:23:58.468089  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:58.468136  148525 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:58.468153  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:58.468194  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:58.468226  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:58.468260  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:58.468317  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:58.468931  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:58.529632  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:58.571900  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:58.612599  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:58.645536  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 19:23:58.675961  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:23:58.700712  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:58.725355  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:58.751138  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:58.775832  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:58.800729  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:58.825558  148525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:58.843331  148525 ssh_runner.go:195] Run: openssl version
	I1010 19:23:58.849271  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:58.861031  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865721  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865797  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.871961  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:58.884520  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:58.896744  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901507  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901571  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.907366  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:58.919784  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:58.931972  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936897  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936981  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.943007  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
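The certificate steps above copy each PEM into /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (the 51391683.0, 3ec20f2e.0, and b5213941.0 names in the log). A minimal sketch of that last step, shelling out to openssl the same way the logged commands do; installCA and the example path are assumptions:

	// Sketch only: link a CA certificate into /etc/ssl/certs under its
	// OpenSSL subject hash, mirroring the "openssl x509 -hash -noout -in"
	// and "ln -fs" commands in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCA(pemPath string) error {
		// openssl x509 -hash -noout -in <cert> prints the 8-hex-digit subject hash.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs <cert> /etc/ssl/certs/<hash>.0
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}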
	I1010 19:23:59.052037  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:01.551982  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:59.028519  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:59.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.528623  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.028697  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.528030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.028062  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.527876  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.028745  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.528569  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.914409  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:58.914894  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:58.914927  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:58.914831  149378 retry.go:31] will retry after 1.631827888s: waiting for machine to come up
	I1010 19:24:00.548505  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:00.549135  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:00.549164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:00.549043  149378 retry.go:31] will retry after 2.126895157s: waiting for machine to come up
	I1010 19:24:02.678328  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:02.678907  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:02.678969  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:02.678891  149378 retry.go:31] will retry after 2.754376625s: waiting for machine to come up
	I1010 19:23:58.955104  148525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:58.959833  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:58.966528  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:58.973590  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:58.982390  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:58.990767  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:58.997162  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:23:59.003647  148525 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:59.003786  148525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:59.003865  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.048772  148525 cri.go:89] found id: ""
	I1010 19:23:59.048869  148525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:59.061267  148525 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:59.061288  148525 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:59.061338  148525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:59.072629  148525 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:59.074287  148525 kubeconfig.go:125] found "default-k8s-diff-port-361847" server: "https://192.168.50.32:8444"
	I1010 19:23:59.077880  148525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:59.090738  148525 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I1010 19:23:59.090783  148525 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:59.090799  148525 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:59.090886  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.136762  148525 cri.go:89] found id: ""
	I1010 19:23:59.136888  148525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:59.155937  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:59.166471  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:59.166493  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:59.166549  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:23:59.178247  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:59.178313  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:59.189455  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:23:59.200127  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:59.200204  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:59.210764  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.221048  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:59.221119  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.231762  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:23:59.242152  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:59.242217  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:59.252608  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:59.265219  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:59.391743  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.243288  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.453782  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.532137  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.623598  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:00.623711  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.124678  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.624626  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.667587  148525 api_server.go:72] duration metric: took 1.043987857s to wait for apiserver process to appear ...
	I1010 19:24:01.667621  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:01.667649  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:01.668298  148525 api_server.go:269] stopped: https://192.168.50.32:8444/healthz: Get "https://192.168.50.32:8444/healthz": dial tcp 192.168.50.32:8444: connect: connection refused
	I1010 19:24:02.168273  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.275654  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.275695  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.275713  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.309713  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.309770  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.668325  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.684992  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:05.685031  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.168198  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.176584  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:06.176627  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.668130  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.682049  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:24:06.692780  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:06.692811  148525 api_server.go:131] duration metric: took 5.025182717s to wait for apiserver health ...
	I1010 19:24:06.692820  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:24:06.692831  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:06.694447  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:03.558797  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:06.054012  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:04.028711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:04.528770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.028716  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.528083  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.028204  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.528430  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.027972  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.027829  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.528676  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.435450  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:05.435940  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:05.435970  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:05.435888  149378 retry.go:31] will retry after 2.981990051s: waiting for machine to come up
	I1010 19:24:08.419385  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:08.419982  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:08.420006  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:08.419905  149378 retry.go:31] will retry after 3.976204267s: waiting for machine to come up
	I1010 19:24:06.695841  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:06.711212  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:06.747753  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:06.768344  148525 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:06.768429  148525 system_pods.go:61] "coredns-7c65d6cfc9-rv8vq" [93b209ea-bb5f-40c5-aea8-8771b785f021] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:06.768446  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [65129999-984d-497c-a6e1-9c53a5374991] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:06.768452  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [5f18ba24-29cf-433e-a70d-23757278c04f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:06.768460  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [c189c785-8ac5-4003-802d-9e7c089d450e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:06.768467  148525 system_pods.go:61] "kube-proxy-v5lm8" [e78eabf9-5c65-4cba-83fd-0837cef05126] Running
	I1010 19:24:06.768476  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [4f84f0f5-e255-4534-9db3-e5cfee0b2447] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:06.768485  148525 system_pods.go:61] "metrics-server-6867b74b74-h5kjm" [a3979b79-bd21-490b-97ac-0a78efd43a99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:06.768493  148525 system_pods.go:61] "storage-provisioner" [ca8606d3-9adb-46da-886a-3081b11b52a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:24:06.768499  148525 system_pods.go:74] duration metric: took 20.716461ms to wait for pod list to return data ...
	I1010 19:24:06.768509  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:06.777935  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:06.777973  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:06.777988  148525 node_conditions.go:105] duration metric: took 9.473726ms to run NodePressure ...
	I1010 19:24:06.778019  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:07.053296  148525 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057585  148525 kubeadm.go:739] kubelet initialised
	I1010 19:24:07.057608  148525 kubeadm.go:740] duration metric: took 4.283027ms waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057618  148525 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:07.064157  148525 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.069962  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.069989  148525 pod_ready.go:82] duration metric: took 5.791958ms for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.069999  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.070022  148525 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.075615  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075644  148525 pod_ready.go:82] duration metric: took 5.608749ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.075654  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075661  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.081717  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081743  148525 pod_ready.go:82] duration metric: took 6.074977ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.081754  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081761  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.152204  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152244  148525 pod_ready.go:82] duration metric: took 70.475599ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.152258  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152266  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551283  148525 pod_ready.go:93] pod "kube-proxy-v5lm8" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:07.551311  148525 pod_ready.go:82] duration metric: took 399.036581ms for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551324  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:08.550896  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:10.551437  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:09.028538  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:09.527883  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.028693  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.528439  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.028528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.528679  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.028002  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.527904  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.028685  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.527833  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.401115  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401808  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has current primary IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401841  147213 main.go:141] libmachine: (no-preload-320324) Found IP for machine: 192.168.72.11
	I1010 19:24:12.401856  147213 main.go:141] libmachine: (no-preload-320324) Reserving static IP address...
	I1010 19:24:12.402368  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.402407  147213 main.go:141] libmachine: (no-preload-320324) DBG | skip adding static IP to network mk-no-preload-320324 - found existing host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"}
	I1010 19:24:12.402426  147213 main.go:141] libmachine: (no-preload-320324) Reserved static IP address: 192.168.72.11
	I1010 19:24:12.402443  147213 main.go:141] libmachine: (no-preload-320324) Waiting for SSH to be available...
	I1010 19:24:12.402458  147213 main.go:141] libmachine: (no-preload-320324) DBG | Getting to WaitForSSH function...
	I1010 19:24:12.404803  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405200  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.405226  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405461  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH client type: external
	I1010 19:24:12.405494  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa (-rw-------)
	I1010 19:24:12.405527  147213 main.go:141] libmachine: (no-preload-320324) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:24:12.405541  147213 main.go:141] libmachine: (no-preload-320324) DBG | About to run SSH command:
	I1010 19:24:12.405554  147213 main.go:141] libmachine: (no-preload-320324) DBG | exit 0
	I1010 19:24:12.529010  147213 main.go:141] libmachine: (no-preload-320324) DBG | SSH cmd err, output: <nil>: 
	I1010 19:24:12.529401  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetConfigRaw
	I1010 19:24:12.530257  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.533285  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533692  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.533727  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533963  147213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/config.json ...
	I1010 19:24:12.534205  147213 machine.go:93] provisionDockerMachine start ...
	I1010 19:24:12.534230  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:12.534450  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.536585  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.536976  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.537003  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.537133  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.537323  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537512  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537689  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.537925  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.538138  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.538151  147213 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:24:12.641679  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:24:12.641706  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.641964  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:24:12.642002  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.642235  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.645149  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645488  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.645521  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.645836  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646001  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646155  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.646352  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.646533  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.646545  147213 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320324 && echo "no-preload-320324" | sudo tee /etc/hostname
	I1010 19:24:12.766449  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320324
	
	I1010 19:24:12.766480  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.769836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770331  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.770356  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770584  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.770810  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.770962  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.771119  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.771252  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.771448  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.771470  147213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320324/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:24:12.882458  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:24:12.882495  147213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:24:12.882537  147213 buildroot.go:174] setting up certificates
	I1010 19:24:12.882547  147213 provision.go:84] configureAuth start
	I1010 19:24:12.882562  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.882865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.885854  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886139  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.886173  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886308  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.888479  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.888819  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888976  147213 provision.go:143] copyHostCerts
	I1010 19:24:12.889037  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:24:12.889049  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:24:12.889102  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:24:12.889235  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:24:12.889246  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:24:12.889278  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:24:12.889370  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:24:12.889381  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:24:12.889406  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:24:12.889493  147213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.no-preload-320324 san=[127.0.0.1 192.168.72.11 localhost minikube no-preload-320324]
	I1010 19:24:12.978176  147213 provision.go:177] copyRemoteCerts
	I1010 19:24:12.978235  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:24:12.978261  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.981662  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982182  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.982218  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.982647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.982829  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.983005  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.067269  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:24:13.092777  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 19:24:13.118530  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:24:13.143401  147213 provision.go:87] duration metric: took 260.833877ms to configureAuth
	I1010 19:24:13.143436  147213 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:24:13.143678  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:13.143776  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.147086  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147507  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.147531  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147787  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.148032  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148222  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.148660  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.149013  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.149041  147213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:24:13.375683  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:24:13.375714  147213 machine.go:96] duration metric: took 841.493636ms to provisionDockerMachine
	I1010 19:24:13.375736  147213 start.go:293] postStartSetup for "no-preload-320324" (driver="kvm2")
	I1010 19:24:13.375754  147213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:24:13.375775  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.376085  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:24:13.376116  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.378855  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379179  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.379224  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379408  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.379608  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.379769  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.379910  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.459580  147213 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:24:13.463644  147213 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:24:13.463674  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:24:13.463751  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:24:13.463845  147213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:24:13.463963  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:24:13.473483  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:13.498773  147213 start.go:296] duration metric: took 123.021762ms for postStartSetup
	I1010 19:24:13.498814  147213 fix.go:56] duration metric: took 20.640532088s for fixHost
	I1010 19:24:13.498834  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.501681  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502243  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.502281  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502476  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.502679  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502835  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502993  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.503177  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.503383  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.503396  147213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:24:13.613929  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588253.586950075
	
	I1010 19:24:13.613954  147213 fix.go:216] guest clock: 1728588253.586950075
	I1010 19:24:13.613963  147213 fix.go:229] Guest: 2024-10-10 19:24:13.586950075 +0000 UTC Remote: 2024-10-10 19:24:13.498818059 +0000 UTC m=+359.788559229 (delta=88.132016ms)
	I1010 19:24:13.613988  147213 fix.go:200] guest clock delta is within tolerance: 88.132016ms
	I1010 19:24:13.614020  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 20.755775587s
	I1010 19:24:13.614063  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.614473  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:13.617327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.617694  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.617721  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.618016  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618670  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618884  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618989  147213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:24:13.619047  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.619142  147213 ssh_runner.go:195] Run: cat /version.json
	I1010 19:24:13.619185  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.621972  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622229  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622322  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622348  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622533  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622666  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622697  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622736  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.622865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622930  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623059  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.623073  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.623225  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623349  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.720999  147213 ssh_runner.go:195] Run: systemctl --version
	I1010 19:24:13.727679  147213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:24:09.562834  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:12.058686  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:13.870558  147213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:24:13.877853  147213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:24:13.877923  147213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:24:13.896295  147213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:24:13.896325  147213 start.go:495] detecting cgroup driver to use...
	I1010 19:24:13.896400  147213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:24:13.913122  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:24:13.929359  147213 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:24:13.929437  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:24:13.944840  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:24:13.960062  147213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:24:14.090774  147213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:24:14.246094  147213 docker.go:233] disabling docker service ...
	I1010 19:24:14.246161  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:24:14.264682  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:24:14.280264  147213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:24:14.437156  147213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:24:14.569220  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:24:14.585723  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:24:14.607349  147213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:24:14.607429  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.619113  147213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:24:14.619198  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.631818  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.643977  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.655753  147213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:24:14.667235  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.679225  147213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.698760  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.710440  147213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:24:14.722565  147213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:24:14.722625  147213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:24:14.740587  147213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:24:14.752630  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:14.887728  147213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:24:14.989026  147213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:24:14.989109  147213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:24:14.995309  147213 start.go:563] Will wait 60s for crictl version
	I1010 19:24:14.995366  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.999840  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:24:15.043758  147213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:24:15.043856  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.079274  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.116630  147213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:24:13.050633  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:15.552413  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:14.028090  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:14.528072  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.028648  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.528388  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.028060  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.528472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.028098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.528125  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.028563  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.528613  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.118343  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:15.121596  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122101  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:15.122133  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122396  147213 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1010 19:24:15.127140  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:15.141249  147213 kubeadm.go:883] updating cluster {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:24:15.141375  147213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:24:15.141417  147213 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:24:15.183271  147213 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:24:15.183303  147213 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:24:15.183412  147213 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.183444  147213 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.183452  147213 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.183459  147213 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1010 19:24:15.183422  147213 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.183493  147213 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.183512  147213 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.183507  147213 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.185099  147213 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.185098  147213 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.185103  147213 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.185106  147213 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.328484  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.333573  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.340047  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.358922  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1010 19:24:15.359800  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.366668  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.409942  147213 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1010 19:24:15.409995  147213 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.410050  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.416186  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.452343  147213 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1010 19:24:15.452385  147213 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.452426  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.533567  147213 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1010 19:24:15.533620  147213 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.533671  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585611  147213 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1010 19:24:15.585659  147213 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.585685  147213 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1010 19:24:15.585712  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585724  147213 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.585765  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585769  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.585805  147213 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1010 19:24:15.585832  147213 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.585856  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.585872  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585943  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.603131  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.661918  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.683739  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.683760  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.683833  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.683880  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.685385  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.792253  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.818116  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.818183  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.818289  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.818321  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.818402  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.878069  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1010 19:24:15.878202  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.940520  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.953841  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1010 19:24:15.953955  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:15.953990  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.954047  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1010 19:24:15.954115  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1010 19:24:15.954120  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1010 19:24:15.954130  147213 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954144  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:15.954157  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954205  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:16.005975  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1010 19:24:16.006028  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1010 19:24:16.006090  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:16.023905  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1010 19:24:16.023990  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1010 19:24:16.024024  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:16.024023  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1010 19:24:16.033715  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.150881  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.144766677s)
	I1010 19:24:18.150935  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1010 19:24:18.150931  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.196753845s)
	I1010 19:24:18.150944  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.126894115s)
	I1010 19:24:18.150973  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1010 19:24:18.150953  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1010 19:24:18.150982  147213 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.117235962s)
	I1010 19:24:18.151002  147213 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151014  147213 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1010 19:24:18.151053  147213 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.151069  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151097  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.059223  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:14.059252  148525 pod_ready.go:82] duration metric: took 6.507918149s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:14.059266  148525 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:16.066908  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.082398  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.051799  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:20.552644  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:19.028306  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:19.527854  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.028765  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.528245  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.027936  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.528172  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.028527  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.528446  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.028709  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.528711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.952099  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.801005716s)
	I1010 19:24:21.952134  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1010 19:24:21.952163  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952165  147213 ssh_runner.go:235] Completed: which crictl: (3.801048272s)
	I1010 19:24:21.952212  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952225  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:21.993627  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:20.566055  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:22.567145  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:23.053514  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:25.554151  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:24.028402  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:24.528516  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.028207  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.527928  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.028032  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.527826  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.028609  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.528448  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.028018  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.528597  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.929370  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.977128659s)
	I1010 19:24:23.929418  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1010 19:24:23.929450  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929498  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.935844384s)
	I1010 19:24:23.929532  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929551  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:26.009485  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079908324s)
	I1010 19:24:26.009567  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 19:24:26.009484  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079925224s)
	I1010 19:24:26.009641  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1010 19:24:26.009671  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:26.009684  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:26.009720  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:27.968483  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.958772952s)
	I1010 19:24:27.968534  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1010 19:24:27.968559  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.958813643s)
	I1010 19:24:27.968587  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1010 19:24:27.968619  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:27.968686  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:25.069787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:27.567013  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:28.050968  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:30.551528  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:29.027960  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.528054  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.028227  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.527791  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.027790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.528761  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.028343  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.528553  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.028195  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.527895  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.315157  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.346440456s)
	I1010 19:24:29.315211  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1010 19:24:29.315244  147213 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:29.315296  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:30.173931  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 19:24:30.173977  147213 cache_images.go:123] Successfully loaded all cached images
	I1010 19:24:30.173985  147213 cache_images.go:92] duration metric: took 14.990666845s to LoadCachedImages
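The preceding cache_images lines trace a probe-then-load pattern: each required image is checked with `sudo podman image inspect --format {{.Id}}`, any image whose ID does not match the expected digest is flagged "needs transfer" and removed with `crictl rmi`, and the cached tarball under /var/lib/minikube/images is then loaded with `sudo podman load -i`. A minimal Go sketch of that sequence, using the etcd image, digest, and tarball path that appear in the log above (paths and sudo access are assumptions of the sketch, not part of minikube's actual implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage mirrors the probe-then-load pattern in the log above:
// inspect the image in the podman store, and if its ID differs from the
// expected digest, remove it and reload it from a cached tarball.
func ensureImage(image, wantID, tarball string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present with the expected ID
	}
	// Stale or missing: drop whatever is there, then load from cache.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("loading %s from %s: %w", image, tarball, err)
	}
	return nil
}

func main() {
	// Values taken from the log lines above, for illustration only.
	err := ensureImage(
		"registry.k8s.io/etcd:3.5.15-0",
		"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
		"/var/lib/minikube/images/etcd_3.5.15-0",
	)
	fmt.Println("ensureImage:", err)
}
```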
	I1010 19:24:30.174001  147213 kubeadm.go:934] updating node { 192.168.72.11 8443 v1.31.1 crio true true} ...
	I1010 19:24:30.174129  147213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:24:30.174221  147213 ssh_runner.go:195] Run: crio config
	I1010 19:24:30.222677  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:30.222702  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:30.222711  147213 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:24:30.222736  147213 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320324 NodeName:no-preload-320324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:24:30.222923  147213 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320324"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:24:30.222998  147213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:24:30.233755  147213 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:24:30.233818  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:24:30.243829  147213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1010 19:24:30.263056  147213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:24:30.282362  147213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1010 19:24:30.300449  147213 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I1010 19:24:30.304661  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:30.317462  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:30.445515  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:30.462816  147213 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324 for IP: 192.168.72.11
	I1010 19:24:30.462847  147213 certs.go:194] generating shared ca certs ...
	I1010 19:24:30.462871  147213 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:30.463074  147213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:24:30.463132  147213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:24:30.463145  147213 certs.go:256] generating profile certs ...
	I1010 19:24:30.463289  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/client.key
	I1010 19:24:30.463364  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key.a7785fc5
	I1010 19:24:30.463413  147213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key
	I1010 19:24:30.463565  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:24:30.463604  147213 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:24:30.463617  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:24:30.463657  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:24:30.463689  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:24:30.463721  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:24:30.463774  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:30.464502  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:24:30.525320  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:24:30.565229  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:24:30.597731  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:24:30.626174  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 19:24:30.659991  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:24:30.685662  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:24:30.710757  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:24:30.736325  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:24:30.771239  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:24:30.796467  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:24:30.821925  147213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:24:30.840743  147213 ssh_runner.go:195] Run: openssl version
	I1010 19:24:30.846898  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:24:30.858410  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863188  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863260  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.869307  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:24:30.880319  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:24:30.891307  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895771  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895828  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.901510  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:24:30.912627  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:24:30.924330  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929108  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929194  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.935266  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:24:30.946714  147213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:24:30.951692  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:24:30.957910  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:24:30.964296  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:24:30.971001  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:24:30.977427  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:24:30.984201  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
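The `openssl x509 -noout -in ... -checkend 86400` invocations above verify that each reused control-plane certificate remains valid for at least the next 24 hours (86400 seconds) before the cluster restart proceeds. A minimal Go equivalent of that check, assuming the certificate is a single PEM block on disk and using one of the cert paths from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window, i.e. the same condition that
// `openssl x509 -noout -in <path> -checkend 86400` tests in the log above.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + window" is past NotAfter, i.e. the cert expires soon.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log; 24h matches -checkend 86400.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
```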
	I1010 19:24:30.990532  147213 kubeadm.go:392] StartCluster: {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:24:30.990622  147213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:24:30.990727  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.033544  147213 cri.go:89] found id: ""
	I1010 19:24:31.033624  147213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:24:31.044956  147213 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:24:31.044975  147213 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:24:31.045025  147213 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:24:31.056563  147213 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:24:31.057705  147213 kubeconfig.go:125] found "no-preload-320324" server: "https://192.168.72.11:8443"
	I1010 19:24:31.059853  147213 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:24:31.071304  147213 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.11
	I1010 19:24:31.071338  147213 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:24:31.071353  147213 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:24:31.071444  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.107345  147213 cri.go:89] found id: ""
	I1010 19:24:31.107429  147213 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:24:31.125556  147213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:24:31.135390  147213 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:24:31.135428  147213 kubeadm.go:157] found existing configuration files:
	
	I1010 19:24:31.135478  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:24:31.144653  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:24:31.144715  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:24:31.154458  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:24:31.163444  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:24:31.163501  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:24:31.172633  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.181939  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:24:31.182001  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.191638  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:24:31.200846  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:24:31.200935  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:24:31.211048  147213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:24:31.221008  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:31.352733  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.270546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.474510  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.551517  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.707737  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:32.707826  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.208647  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.708539  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.728647  147213 api_server.go:72] duration metric: took 1.020907246s to wait for apiserver process to appear ...
	I1010 19:24:33.728678  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:33.728701  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:30.066635  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.066732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.552277  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:35.051399  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:34.028434  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:34.527949  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.028224  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.528470  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.028540  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.528499  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.028362  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.528483  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.028531  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.527865  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.025756  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.025787  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.025802  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.078247  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.078283  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.229601  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.237166  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.237204  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:37.728824  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.735660  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.735700  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.229746  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.234449  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:38.234491  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.729000  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.737564  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:24:38.751982  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:38.752012  147213 api_server.go:131] duration metric: took 5.023326632s to wait for apiserver health ...
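The api_server wait above polls https://192.168.72.11:8443/healthz and tolerates the expected progression: 403 while anonymous access is still forbidden, 500 while post-start hooks such as rbac/bootstrap-roles are pending, and finally 200 "ok". A rough Go sketch of that retry loop follows; the overall timeout is a hypothetical value, and TLS verification is skipped only to keep the sketch short (minikube itself authenticates against the cluster CA rather than skipping verification):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes, mirroring the 403 -> 500 -> 200
// progression visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	// Endpoint taken from the log; the 4-minute budget is an assumption.
	fmt.Println(waitForHealthz("https://192.168.72.11:8443/healthz", 4*time.Minute))
}
```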
	I1010 19:24:38.752023  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:38.752030  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:38.753351  147213 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:34.067208  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:36.067413  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.566729  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.754645  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:38.772086  147213 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:38.792017  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:38.800547  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:38.800592  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:38.800602  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:38.800609  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:38.800617  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:38.800624  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:24:38.800629  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:38.800638  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:38.800642  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:24:38.800648  147213 system_pods.go:74] duration metric: took 8.60732ms to wait for pod list to return data ...
	I1010 19:24:38.800654  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:38.804628  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:38.804663  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:38.804680  147213 node_conditions.go:105] duration metric: took 4.021699ms to run NodePressure ...
	I1010 19:24:38.804700  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:39.078452  147213 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087090  147213 kubeadm.go:739] kubelet initialised
	I1010 19:24:39.087116  147213 kubeadm.go:740] duration metric: took 8.636436ms waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087125  147213 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:39.094468  147213 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.108724  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108756  147213 pod_ready.go:82] duration metric: took 14.254631ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.108770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108780  147213 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.119304  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119335  147213 pod_ready.go:82] duration metric: took 10.543376ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.119345  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119352  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.127243  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127268  147213 pod_ready.go:82] duration metric: took 7.907414ms for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.127278  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127285  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.195549  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195578  147213 pod_ready.go:82] duration metric: took 68.282333ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.195588  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195594  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.595842  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595871  147213 pod_ready.go:82] duration metric: took 400.267905ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.595880  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595886  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.995731  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995760  147213 pod_ready.go:82] duration metric: took 399.866947ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.995770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995777  147213 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:40.396420  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396456  147213 pod_ready.go:82] duration metric: took 400.667834ms for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:40.396470  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396482  147213 pod_ready.go:39] duration metric: took 1.309346973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
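	(Editor's note: the pod_ready checks above amount to reading each pod's PodReady condition from the API server, and skipping pods whose node is not yet Ready. A hedged client-go sketch of the per-pod check follows; the namespace, pod name, and kubeconfig path are taken from the log, the helper itself is illustrative and not minikube's code. It assumes k8s.io/client-go v0.18+ for the context-aware Get.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the named pod has the PodReady condition True.
	func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ready, err := isPodReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-86brb")
		fmt.Println("ready:", ready, "err:", err)
	}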
	I1010 19:24:40.396508  147213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:24:40.409956  147213 ops.go:34] apiserver oom_adj: -16
	I1010 19:24:40.409980  147213 kubeadm.go:597] duration metric: took 9.364998977s to restartPrimaryControlPlane
	I1010 19:24:40.409991  147213 kubeadm.go:394] duration metric: took 9.419470024s to StartCluster
	I1010 19:24:40.410009  147213 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.410085  147213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:24:40.413037  147213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.413448  147213 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:24:40.413783  147213 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:24:40.413979  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:40.413996  147213 addons.go:69] Setting default-storageclass=true in profile "no-preload-320324"
	I1010 19:24:40.414020  147213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320324"
	I1010 19:24:40.413983  147213 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320324"
	I1010 19:24:40.414048  147213 addons.go:234] Setting addon storage-provisioner=true in "no-preload-320324"
	W1010 19:24:40.414057  147213 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:24:40.414091  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414170  147213 addons.go:69] Setting metrics-server=true in profile "no-preload-320324"
	I1010 19:24:40.414230  147213 addons.go:234] Setting addon metrics-server=true in "no-preload-320324"
	W1010 19:24:40.414252  147213 addons.go:243] addon metrics-server should already be in state true
	I1010 19:24:40.414292  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414612  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414640  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414678  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.414712  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.415409  147213 out.go:177] * Verifying Kubernetes components...
	I1010 19:24:40.415412  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.415553  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.416812  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:40.431363  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1010 19:24:40.431474  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I1010 19:24:40.431659  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I1010 19:24:40.431983  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432136  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432156  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432567  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432587  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432710  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432732  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432740  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432749  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.433000  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433079  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433103  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433468  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.433498  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.436984  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.453362  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.453426  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.454884  147213 addons.go:234] Setting addon default-storageclass=true in "no-preload-320324"
	W1010 19:24:40.454913  147213 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:24:40.454947  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.455335  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.455394  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.470642  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1010 19:24:40.471118  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.471701  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.471730  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.472241  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.472523  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.473953  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I1010 19:24:40.474196  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I1010 19:24:40.474332  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474672  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474814  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.474827  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475181  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.475210  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475310  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475702  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475785  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.475825  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.475922  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.476046  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.478147  147213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:40.478395  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.479869  147213 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.479896  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:24:40.479922  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.480549  147213 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:24:37.051611  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:39.551952  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:41.553895  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:40.482101  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:24:40.482119  147213 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:24:40.482144  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.484066  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484560  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.484588  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484833  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.485065  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.485241  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.485272  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485443  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.485788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.485807  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485842  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.486017  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.486202  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.486454  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.492533  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1010 19:24:40.493012  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.493566  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.493595  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.494056  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.494325  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.496053  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.496301  147213 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.496321  147213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:24:40.496344  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.499125  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499667  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.499690  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499843  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.500022  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.500194  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.500357  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.651454  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:40.667056  147213 node_ready.go:35] waiting up to 6m0s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:40.782217  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.803094  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:24:40.803122  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:24:40.812288  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.837679  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:24:40.837723  147213 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:24:40.882090  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:40.882119  147213 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:24:40.940115  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:41.949181  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.136852217s)
	I1010 19:24:41.949258  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949275  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949286  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.167030419s)
	I1010 19:24:41.949327  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949345  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949625  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949652  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949660  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949661  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949668  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949679  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949761  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949804  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949819  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949826  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.950811  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950824  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.950827  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950822  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950845  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950811  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.957797  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.957814  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.958071  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.958077  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.958099  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005530  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065377363s)
	I1010 19:24:42.005590  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.005602  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.005914  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.005937  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005935  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.005972  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.006003  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.006280  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.006313  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.006335  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.006354  147213 addons.go:475] Verifying addon metrics-server=true in "no-preload-320324"
	I1010 19:24:42.008523  147213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1010 19:24:39.028363  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:39.528571  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.027992  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.528552  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.028220  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.527889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:42.028462  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:42.028549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:42.070669  148123 cri.go:89] found id: ""
	I1010 19:24:42.070701  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.070710  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:42.070716  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:42.070775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:42.110693  148123 cri.go:89] found id: ""
	I1010 19:24:42.110731  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.110748  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:42.110756  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:42.110816  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:42.148471  148123 cri.go:89] found id: ""
	I1010 19:24:42.148511  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.148525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:42.148535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:42.148603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:42.184637  148123 cri.go:89] found id: ""
	I1010 19:24:42.184670  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.184683  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:42.184691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:42.184759  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:42.226794  148123 cri.go:89] found id: ""
	I1010 19:24:42.226834  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.226848  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:42.226857  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:42.226942  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:42.262969  148123 cri.go:89] found id: ""
	I1010 19:24:42.263004  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.263017  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:42.263027  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:42.263167  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:42.301063  148123 cri.go:89] found id: ""
	I1010 19:24:42.301088  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.301096  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:42.301102  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:42.301153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:42.338829  148123 cri.go:89] found id: ""
	I1010 19:24:42.338862  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.338873  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:42.338886  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:42.338901  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:42.391426  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:42.391478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:42.405520  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:42.405563  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:42.544245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:42.544275  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:42.544292  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:42.620760  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:42.620804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
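	(Editor's note: the repeated "listing CRI containers" / "Gathering logs" cycle above is driven by plain shell probes run over SSH: crictl ps for each component, journalctl for kubelet and CRI-O, and kubectl describe nodes. The Go sketch below mirrors the crictl probe locally and is illustrative only; the command and its flags are taken verbatim from the log.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers runs crictl and returns the IDs of containers whose name
	// matches the given filter, mirroring the probe the log repeats above.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		ids, err := listContainers("kube-apiserver")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		if len(ids) == 0 {
			// Corresponds to the `found id: ""` / `0 containers` lines above.
			fmt.Println("no kube-apiserver container found")
			return
		}
		fmt.Println("found:", ids)
	}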
	I1010 19:24:42.009965  147213 addons.go:510] duration metric: took 1.596190602s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1010 19:24:42.672792  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:41.066744  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.066850  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.557231  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:46.051820  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:45.164389  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:45.179714  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:45.179776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:45.216271  148123 cri.go:89] found id: ""
	I1010 19:24:45.216308  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.216319  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:45.216327  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:45.216394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:45.255109  148123 cri.go:89] found id: ""
	I1010 19:24:45.255154  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.255167  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:45.255176  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:45.255248  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:45.294338  148123 cri.go:89] found id: ""
	I1010 19:24:45.294369  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.294380  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:45.294388  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:45.294457  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:45.339573  148123 cri.go:89] found id: ""
	I1010 19:24:45.339606  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.339626  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:45.339636  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:45.339703  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:45.400184  148123 cri.go:89] found id: ""
	I1010 19:24:45.400214  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.400225  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:45.400234  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:45.400301  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:45.436152  148123 cri.go:89] found id: ""
	I1010 19:24:45.436183  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.436195  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:45.436203  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:45.436262  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.484321  148123 cri.go:89] found id: ""
	I1010 19:24:45.484347  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.484355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:45.484361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:45.484441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:45.532893  148123 cri.go:89] found id: ""
	I1010 19:24:45.532923  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.532932  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:45.532943  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:45.532958  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:45.585183  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:45.585214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:45.638800  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:45.638847  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:45.653928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:45.653961  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:45.733534  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:45.733564  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:45.733580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.314367  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:48.329687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:48.329754  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:48.370372  148123 cri.go:89] found id: ""
	I1010 19:24:48.370400  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.370409  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:48.370415  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:48.370490  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:48.409362  148123 cri.go:89] found id: ""
	I1010 19:24:48.409396  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.409407  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:48.409415  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:48.409500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:48.449640  148123 cri.go:89] found id: ""
	I1010 19:24:48.449672  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.449681  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:48.449687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:48.449746  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:48.485053  148123 cri.go:89] found id: ""
	I1010 19:24:48.485104  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.485121  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:48.485129  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:48.485183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:48.520083  148123 cri.go:89] found id: ""
	I1010 19:24:48.520113  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.520121  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:48.520127  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:48.520185  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:48.555112  148123 cri.go:89] found id: ""
	I1010 19:24:48.555138  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.555149  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:48.555156  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:48.555241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.171882  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:47.673073  147213 node_ready.go:49] node "no-preload-320324" has status "Ready":"True"
	I1010 19:24:47.673103  147213 node_ready.go:38] duration metric: took 7.00601327s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:47.673117  147213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:47.682195  147213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690079  147213 pod_ready.go:93] pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.690111  147213 pod_ready.go:82] duration metric: took 7.882823ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690126  147213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698009  147213 pod_ready.go:93] pod "etcd-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.698038  147213 pod_ready.go:82] duration metric: took 7.903016ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698052  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:45.066893  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:47.566144  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.551853  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.050365  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.592682  148123 cri.go:89] found id: ""
	I1010 19:24:48.592710  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.592719  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:48.592725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:48.592775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:48.628449  148123 cri.go:89] found id: ""
	I1010 19:24:48.628482  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.628490  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:48.628500  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:48.628513  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.709385  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:48.709428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:48.752542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:48.752677  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:48.812331  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:48.812385  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:48.827057  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:48.827095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:48.908312  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.409122  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:51.423404  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:51.423482  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:51.465742  148123 cri.go:89] found id: ""
	I1010 19:24:51.465781  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.465793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:51.465803  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:51.465875  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:51.501597  148123 cri.go:89] found id: ""
	I1010 19:24:51.501630  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.501641  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:51.501648  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:51.501711  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:51.537500  148123 cri.go:89] found id: ""
	I1010 19:24:51.537539  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.537551  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:51.537559  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:51.537626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:51.579546  148123 cri.go:89] found id: ""
	I1010 19:24:51.579576  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.579587  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:51.579595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:51.579660  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:51.619893  148123 cri.go:89] found id: ""
	I1010 19:24:51.619917  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.619925  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:51.619931  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:51.619999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:51.655787  148123 cri.go:89] found id: ""
	I1010 19:24:51.655824  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.655837  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:51.655846  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:51.655921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:51.699582  148123 cri.go:89] found id: ""
	I1010 19:24:51.699611  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.699619  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:51.699625  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:51.699685  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:51.738658  148123 cri.go:89] found id: ""
	I1010 19:24:51.738689  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.738697  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:51.738707  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:51.738721  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:51.789556  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:51.789587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:51.858919  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:51.858968  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:51.887818  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:51.887854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:51.964408  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.964434  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:51.964449  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:49.705130  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.705847  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.205374  147213 pod_ready.go:93] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.205401  147213 pod_ready.go:82] duration metric: took 5.507341974s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.205413  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210237  147213 pod_ready.go:93] pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.210259  147213 pod_ready.go:82] duration metric: took 4.83925ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210269  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215158  147213 pod_ready.go:93] pod "kube-proxy-vn6sv" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.215186  147213 pod_ready.go:82] duration metric: took 4.909888ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215198  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220077  147213 pod_ready.go:93] pod "kube-scheduler-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.220097  147213 pod_ready.go:82] duration metric: took 4.890652ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220105  147213 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:50.066165  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:52.066343  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.552604  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:56.050748  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.545627  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:54.560748  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:54.560830  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:54.595863  148123 cri.go:89] found id: ""
	I1010 19:24:54.595893  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.595903  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:54.595912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:54.595978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:54.633645  148123 cri.go:89] found id: ""
	I1010 19:24:54.633681  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.633693  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:54.633701  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:54.633761  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:54.668269  148123 cri.go:89] found id: ""
	I1010 19:24:54.668299  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.668311  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:54.668317  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:54.668369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:54.706559  148123 cri.go:89] found id: ""
	I1010 19:24:54.706591  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.706600  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:54.706608  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:54.706673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:54.746248  148123 cri.go:89] found id: ""
	I1010 19:24:54.746283  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.746295  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:54.746303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:54.746383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:54.782982  148123 cri.go:89] found id: ""
	I1010 19:24:54.783017  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.783027  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:54.783033  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:54.783085  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:54.819664  148123 cri.go:89] found id: ""
	I1010 19:24:54.819700  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.819713  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:54.819722  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:54.819797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:54.859603  148123 cri.go:89] found id: ""
	I1010 19:24:54.859632  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.859640  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:54.859650  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:54.859662  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:54.910949  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:54.910987  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:54.925941  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:54.925975  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:55.009626  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:55.009654  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:55.009669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.097196  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:55.097237  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
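	Each repetition of the block above is the same diagnostic loop for the profile using the v1.20.0 binaries (apparently the old-k8s-version test, judging by the /var/lib/minikube/binaries/v1.20.0 path): it probes for a running apiserver with pgrep, lists CRI-O containers for every control-plane component and finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output; describe nodes keeps failing because nothing is serving on localhost:8443. A minimal sketch of running the equivalent checks by hand on the node, assuming the real profile name is substituted for <profile> (the crictl and journalctl invocations mirror the ones in the log):
	
	  # Open a shell on the affected node.
	  minikube ssh -p <profile>
	
	  # Confirm that no apiserver container was ever created under CRI-O.
	  sudo crictl ps -a --name kube-apiserver
	
	  # Ask the kubelet why the static control-plane pods are not coming up.
	  sudo journalctl -u kubelet -n 100 --no-pager
	
	  # Verify that nothing is listening on the apiserver port, which is what
	  # turns "kubectl describe nodes" into the "connection refused" errors above.
	  sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	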
	I1010 19:24:57.641732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:57.661141  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:57.661222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:57.716054  148123 cri.go:89] found id: ""
	I1010 19:24:57.716086  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.716094  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:57.716100  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:57.716178  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:57.766862  148123 cri.go:89] found id: ""
	I1010 19:24:57.766892  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.766906  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:57.766917  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:57.766989  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:57.814776  148123 cri.go:89] found id: ""
	I1010 19:24:57.814808  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.814821  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:57.814829  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:57.814899  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:57.850458  148123 cri.go:89] found id: ""
	I1010 19:24:57.850495  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.850505  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:57.850516  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:57.850667  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:57.886541  148123 cri.go:89] found id: ""
	I1010 19:24:57.886566  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.886575  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:57.886581  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:57.886645  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:57.920748  148123 cri.go:89] found id: ""
	I1010 19:24:57.920783  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.920795  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:57.920802  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:57.920887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:57.957801  148123 cri.go:89] found id: ""
	I1010 19:24:57.957833  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.957844  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:57.957852  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:57.957919  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:57.995648  148123 cri.go:89] found id: ""
	I1010 19:24:57.995683  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.995694  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:57.995706  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:57.995723  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:58.034987  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:58.035030  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:58.089014  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:58.089066  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:58.104179  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:58.104211  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:58.178239  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:58.178271  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:58.178291  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.229459  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.727298  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.566779  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.065902  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:58.051248  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.550512  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.756057  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:00.770086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:00.770150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:00.811400  148123 cri.go:89] found id: ""
	I1010 19:25:00.811444  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.811456  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:00.811466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:00.811533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:00.848821  148123 cri.go:89] found id: ""
	I1010 19:25:00.848859  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.848870  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:00.848877  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:00.848947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:00.883529  148123 cri.go:89] found id: ""
	I1010 19:25:00.883557  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.883566  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:00.883573  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:00.883631  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:00.922952  148123 cri.go:89] found id: ""
	I1010 19:25:00.922982  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.922992  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:00.922998  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:00.923057  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:00.956116  148123 cri.go:89] found id: ""
	I1010 19:25:00.956147  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.956159  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:00.956168  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:00.956233  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:00.998892  148123 cri.go:89] found id: ""
	I1010 19:25:00.998919  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.998930  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:00.998939  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:00.998996  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:01.037617  148123 cri.go:89] found id: ""
	I1010 19:25:01.037649  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.037657  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:01.037663  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:01.037717  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:01.078003  148123 cri.go:89] found id: ""
	I1010 19:25:01.078034  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.078046  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:01.078055  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:01.078069  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:01.118948  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:01.118977  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:01.171053  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:01.171091  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:01.187027  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:01.187060  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:01.261288  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:01.261315  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:01.261330  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:59.728997  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.227142  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:59.566448  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.066184  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.551951  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:05.050558  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:03.848144  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:03.862527  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:03.862601  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:03.899120  148123 cri.go:89] found id: ""
	I1010 19:25:03.899159  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.899180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:03.899187  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:03.899276  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:03.935461  148123 cri.go:89] found id: ""
	I1010 19:25:03.935492  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.935500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:03.935507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:03.935569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:03.971892  148123 cri.go:89] found id: ""
	I1010 19:25:03.971925  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.971937  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:03.971945  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:03.972019  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:04.008341  148123 cri.go:89] found id: ""
	I1010 19:25:04.008368  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.008377  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:04.008390  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:04.008447  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:04.042007  148123 cri.go:89] found id: ""
	I1010 19:25:04.042036  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.042044  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:04.042051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:04.042101  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:04.078017  148123 cri.go:89] found id: ""
	I1010 19:25:04.078045  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.078053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:04.078059  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:04.078112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:04.116792  148123 cri.go:89] found id: ""
	I1010 19:25:04.116823  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.116832  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:04.116839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:04.116928  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:04.153468  148123 cri.go:89] found id: ""
	I1010 19:25:04.153496  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.153503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:04.153513  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:04.153525  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:04.230646  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:04.230683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:04.270975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:04.271015  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.320845  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:04.320902  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:04.337789  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:04.337828  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:04.416077  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:06.916412  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:06.931309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:06.931376  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:06.969224  148123 cri.go:89] found id: ""
	I1010 19:25:06.969257  148123 logs.go:282] 0 containers: []
	W1010 19:25:06.969269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:06.969277  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:06.969346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:07.006598  148123 cri.go:89] found id: ""
	I1010 19:25:07.006653  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.006667  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:07.006676  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:07.006744  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:07.046392  148123 cri.go:89] found id: ""
	I1010 19:25:07.046418  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.046427  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:07.046433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:07.046491  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:07.089570  148123 cri.go:89] found id: ""
	I1010 19:25:07.089603  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.089615  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:07.089624  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:07.089689  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:07.127152  148123 cri.go:89] found id: ""
	I1010 19:25:07.127185  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.127195  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:07.127204  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:07.127282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:07.160516  148123 cri.go:89] found id: ""
	I1010 19:25:07.160543  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.160554  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:07.160563  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:07.160635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:07.197270  148123 cri.go:89] found id: ""
	I1010 19:25:07.197307  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.197318  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:07.197327  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:07.197395  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:07.232658  148123 cri.go:89] found id: ""
	I1010 19:25:07.232687  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.232696  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:07.232706  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:07.232720  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:07.246579  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:07.246609  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:07.315200  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:07.315230  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:07.315251  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:07.389532  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:07.389580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:07.438808  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:07.438837  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.227537  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.727865  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:04.067121  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.565089  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:08.565565  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:07.051371  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.051420  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.054211  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.990294  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:10.004155  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:10.004270  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:10.039034  148123 cri.go:89] found id: ""
	I1010 19:25:10.039068  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.039078  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:10.039087  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:10.039174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:10.079936  148123 cri.go:89] found id: ""
	I1010 19:25:10.079967  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.079978  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:10.079985  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:10.080068  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:10.116441  148123 cri.go:89] found id: ""
	I1010 19:25:10.116471  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.116483  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:10.116491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:10.116556  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:10.151998  148123 cri.go:89] found id: ""
	I1010 19:25:10.152045  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.152058  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:10.152075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:10.152153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:10.188514  148123 cri.go:89] found id: ""
	I1010 19:25:10.188547  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.188565  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:10.188574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:10.188640  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:10.223648  148123 cri.go:89] found id: ""
	I1010 19:25:10.223682  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.223694  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:10.223702  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:10.223771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:10.264117  148123 cri.go:89] found id: ""
	I1010 19:25:10.264143  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.264151  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:10.264158  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:10.264215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:10.302566  148123 cri.go:89] found id: ""
	I1010 19:25:10.302601  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.302613  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:10.302625  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:10.302649  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:10.316567  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:10.316606  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:10.388642  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:10.388674  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:10.388692  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:10.471035  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:10.471090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:10.519600  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:10.519645  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:13.075428  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:13.090830  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:13.090915  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:13.126302  148123 cri.go:89] found id: ""
	I1010 19:25:13.126336  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.126348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:13.126357  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:13.126429  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:13.163309  148123 cri.go:89] found id: ""
	I1010 19:25:13.163344  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.163357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:13.163365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:13.163428  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:13.197569  148123 cri.go:89] found id: ""
	I1010 19:25:13.197605  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.197615  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:13.197621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:13.197692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:13.236299  148123 cri.go:89] found id: ""
	I1010 19:25:13.236328  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.236336  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:13.236342  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:13.236406  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:13.275667  148123 cri.go:89] found id: ""
	I1010 19:25:13.275696  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.275705  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:13.275711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:13.275776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:13.309728  148123 cri.go:89] found id: ""
	I1010 19:25:13.309763  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.309774  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:13.309786  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:13.309854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:13.347464  148123 cri.go:89] found id: ""
	I1010 19:25:13.347493  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.347504  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:13.347513  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:13.347576  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:13.385086  148123 cri.go:89] found id: ""
	I1010 19:25:13.385119  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.385130  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:13.385142  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:13.385156  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:13.466513  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:13.466553  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:13.508610  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:13.508651  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:09.226850  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.227241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.726879  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:10.565663  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:12.565845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.555465  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:16.051764  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.564897  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:13.564932  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:13.578771  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:13.578803  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:13.651857  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.153066  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:16.167613  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:16.167698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:16.203867  148123 cri.go:89] found id: ""
	I1010 19:25:16.203897  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.203906  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:16.203912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:16.203965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:16.242077  148123 cri.go:89] found id: ""
	I1010 19:25:16.242109  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.242130  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:16.242138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:16.242234  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:16.283792  148123 cri.go:89] found id: ""
	I1010 19:25:16.283825  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.283836  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:16.283844  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:16.283907  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:16.319934  148123 cri.go:89] found id: ""
	I1010 19:25:16.319969  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.319980  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:16.319990  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:16.320063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:16.360439  148123 cri.go:89] found id: ""
	I1010 19:25:16.360482  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.360495  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:16.360504  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:16.360569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:16.395890  148123 cri.go:89] found id: ""
	I1010 19:25:16.395922  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.395931  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:16.395941  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:16.396009  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:16.431396  148123 cri.go:89] found id: ""
	I1010 19:25:16.431442  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.431456  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:16.431463  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:16.431531  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:16.471768  148123 cri.go:89] found id: ""
	I1010 19:25:16.471796  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.471804  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:16.471814  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:16.471830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:16.526112  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:16.526155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:16.541041  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:16.541074  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:16.623245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.623273  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:16.623289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:16.710840  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:16.710884  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:15.727171  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.728705  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:15.067362  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.566242  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:18.551207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:21.050222  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:19.257566  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:19.273982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:19.274063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:19.311360  148123 cri.go:89] found id: ""
	I1010 19:25:19.311392  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.311404  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:19.311411  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:19.311475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:19.347025  148123 cri.go:89] found id: ""
	I1010 19:25:19.347053  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.347062  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:19.347068  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:19.347120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:19.383575  148123 cri.go:89] found id: ""
	I1010 19:25:19.383608  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.383620  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:19.383633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:19.383698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:19.419245  148123 cri.go:89] found id: ""
	I1010 19:25:19.419270  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.419278  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:19.419284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:19.419334  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:19.453563  148123 cri.go:89] found id: ""
	I1010 19:25:19.453591  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.453601  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:19.453658  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:19.453715  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:19.496430  148123 cri.go:89] found id: ""
	I1010 19:25:19.496458  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.496466  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:19.496473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:19.496533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:19.534626  148123 cri.go:89] found id: ""
	I1010 19:25:19.534670  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.534682  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:19.534691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:19.534757  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:19.574186  148123 cri.go:89] found id: ""
	I1010 19:25:19.574236  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.574248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:19.574261  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:19.574283  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:19.587952  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:19.587989  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:19.664607  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:19.664644  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:19.664664  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:19.742185  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:19.742224  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:19.781530  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:19.781562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.335398  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:22.349627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:22.349693  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:22.389075  148123 cri.go:89] found id: ""
	I1010 19:25:22.389102  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.389110  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:22.389117  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:22.389168  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:22.424331  148123 cri.go:89] found id: ""
	I1010 19:25:22.424368  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.424381  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:22.424389  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:22.424460  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:22.464011  148123 cri.go:89] found id: ""
	I1010 19:25:22.464051  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.464062  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:22.464070  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:22.464141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:22.500786  148123 cri.go:89] found id: ""
	I1010 19:25:22.500819  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.500828  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:22.500835  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:22.500914  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:22.538016  148123 cri.go:89] found id: ""
	I1010 19:25:22.538053  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.538061  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:22.538068  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:22.538124  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:22.576600  148123 cri.go:89] found id: ""
	I1010 19:25:22.576629  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.576637  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:22.576643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:22.576702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:22.616020  148123 cri.go:89] found id: ""
	I1010 19:25:22.616048  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.616057  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:22.616064  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:22.616114  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:22.652238  148123 cri.go:89] found id: ""
	I1010 19:25:22.652271  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.652283  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:22.652295  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:22.652311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:22.726182  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:22.726216  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:22.726234  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:22.814040  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:22.814086  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:22.859254  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:22.859289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.916565  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:22.916607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:20.227871  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.732566  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:20.066872  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.566173  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:23.050833  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.551662  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.432636  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:25.446982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:25.447047  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.482440  148123 cri.go:89] found id: ""
	I1010 19:25:25.482472  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.482484  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:25.482492  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:25.482552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:25.517709  148123 cri.go:89] found id: ""
	I1010 19:25:25.517742  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.517756  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:25.517764  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:25.517867  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:25.561504  148123 cri.go:89] found id: ""
	I1010 19:25:25.561532  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.561544  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:25.561552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:25.561616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:25.601704  148123 cri.go:89] found id: ""
	I1010 19:25:25.601741  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.601753  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:25.601762  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:25.601825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:25.638519  148123 cri.go:89] found id: ""
	I1010 19:25:25.638544  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.638552  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:25.638558  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:25.638609  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:25.673390  148123 cri.go:89] found id: ""
	I1010 19:25:25.673425  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.673436  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:25.673447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:25.673525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:25.708251  148123 cri.go:89] found id: ""
	I1010 19:25:25.708278  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.708286  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:25.708293  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:25.708354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:25.749793  148123 cri.go:89] found id: ""
	I1010 19:25:25.749826  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.749837  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:25.749846  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:25.749861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:25.802511  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:25.802554  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:25.817523  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:25.817562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:25.898595  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:25.898619  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:25.898635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:25.978629  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:25.978684  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:28.520165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:28.532725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:28.532793  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.226875  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.729015  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.066298  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.066963  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.551915  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.558497  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:28.571667  148123 cri.go:89] found id: ""
	I1010 19:25:28.571695  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.571704  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:28.571710  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:28.571770  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:28.608136  148123 cri.go:89] found id: ""
	I1010 19:25:28.608165  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.608174  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:28.608181  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:28.608244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:28.642123  148123 cri.go:89] found id: ""
	I1010 19:25:28.642161  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.642173  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:28.642181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:28.642242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:28.678600  148123 cri.go:89] found id: ""
	I1010 19:25:28.678633  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.678643  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:28.678651  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:28.678702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:28.716627  148123 cri.go:89] found id: ""
	I1010 19:25:28.716660  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.716673  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:28.716681  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:28.716766  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:28.752750  148123 cri.go:89] found id: ""
	I1010 19:25:28.752786  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.752798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:28.752806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:28.752898  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:28.790021  148123 cri.go:89] found id: ""
	I1010 19:25:28.790054  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.790066  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:28.790075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:28.790128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:28.828760  148123 cri.go:89] found id: ""
	I1010 19:25:28.828803  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.828827  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:28.828839  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:28.828877  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:28.879773  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:28.879811  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:28.894311  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:28.894342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:28.968675  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:28.968706  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:28.968729  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:29.045862  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:29.045913  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:31.588772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:31.601453  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:31.601526  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:31.637056  148123 cri.go:89] found id: ""
	I1010 19:25:31.637090  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.637101  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:31.637108  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:31.637183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:31.677825  148123 cri.go:89] found id: ""
	I1010 19:25:31.677854  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.677862  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:31.677867  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:31.677920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:31.715372  148123 cri.go:89] found id: ""
	I1010 19:25:31.715402  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.715411  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:31.715419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:31.715470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:31.753767  148123 cri.go:89] found id: ""
	I1010 19:25:31.753794  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.753806  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:31.753814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:31.753879  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:31.792999  148123 cri.go:89] found id: ""
	I1010 19:25:31.793024  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.793035  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:31.793050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:31.793106  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:31.831571  148123 cri.go:89] found id: ""
	I1010 19:25:31.831598  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.831607  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:31.831614  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:31.831673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:31.880382  148123 cri.go:89] found id: ""
	I1010 19:25:31.880422  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.880436  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:31.880447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:31.880527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:31.919596  148123 cri.go:89] found id: ""
	I1010 19:25:31.919627  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.919637  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:31.919647  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:31.919661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:31.967963  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:31.968001  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:31.983064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:31.983106  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:32.056073  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:32.056105  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:32.056122  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:32.142927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:32.142978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:30.226683  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.227047  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.565699  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:31.566109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.051411  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.052064  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.550062  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.690025  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:34.705326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:34.705413  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:34.740756  148123 cri.go:89] found id: ""
	I1010 19:25:34.740786  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.740793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:34.740799  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:34.740872  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:34.777021  148123 cri.go:89] found id: ""
	I1010 19:25:34.777049  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.777061  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:34.777069  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:34.777123  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:34.826699  148123 cri.go:89] found id: ""
	I1010 19:25:34.826735  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.826747  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:34.826754  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:34.826896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:34.864297  148123 cri.go:89] found id: ""
	I1010 19:25:34.864329  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.864340  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:34.864348  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:34.864415  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:34.899198  148123 cri.go:89] found id: ""
	I1010 19:25:34.899234  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.899247  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:34.899256  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:34.899315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:34.934132  148123 cri.go:89] found id: ""
	I1010 19:25:34.934162  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.934170  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:34.934176  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:34.934238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:34.969858  148123 cri.go:89] found id: ""
	I1010 19:25:34.969891  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.969903  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:34.969911  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:34.969978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:35.006082  148123 cri.go:89] found id: ""
	I1010 19:25:35.006121  148123 logs.go:282] 0 containers: []
	W1010 19:25:35.006132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:35.006145  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:35.006160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:35.060596  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:35.060631  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:35.076246  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:35.076281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:35.151879  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:35.151912  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:35.151931  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:35.231802  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:35.231845  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:37.774550  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:37.787871  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:37.787938  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:37.824273  148123 cri.go:89] found id: ""
	I1010 19:25:37.824306  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.824317  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:37.824325  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:37.824410  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:37.860548  148123 cri.go:89] found id: ""
	I1010 19:25:37.860582  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.860593  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:37.860601  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:37.860677  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:37.894616  148123 cri.go:89] found id: ""
	I1010 19:25:37.894644  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.894654  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:37.894662  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:37.894730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:37.931247  148123 cri.go:89] found id: ""
	I1010 19:25:37.931272  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.931280  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:37.931287  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:37.931337  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:37.968028  148123 cri.go:89] found id: ""
	I1010 19:25:37.968068  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.968079  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:37.968086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:37.968153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:38.004661  148123 cri.go:89] found id: ""
	I1010 19:25:38.004692  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.004700  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:38.004706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:38.004760  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:38.040874  148123 cri.go:89] found id: ""
	I1010 19:25:38.040906  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.040915  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:38.040922  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:38.040990  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:38.083771  148123 cri.go:89] found id: ""
	I1010 19:25:38.083802  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.083811  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:38.083821  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:38.083836  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:38.099645  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:38.099683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:38.172168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:38.172207  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:38.172223  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:38.248523  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:38.248561  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:38.306594  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:38.306630  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:34.728106  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:37.226285  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.065919  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.066751  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.067361  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.550359  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.551190  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.873868  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:40.889561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:40.889668  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:40.926769  148123 cri.go:89] found id: ""
	I1010 19:25:40.926800  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.926808  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:40.926816  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:40.926869  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:40.962564  148123 cri.go:89] found id: ""
	I1010 19:25:40.962592  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.962600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:40.962606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:40.962659  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:40.997856  148123 cri.go:89] found id: ""
	I1010 19:25:40.997885  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.997894  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:40.997900  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:40.997951  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:41.035479  148123 cri.go:89] found id: ""
	I1010 19:25:41.035506  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.035514  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:41.035520  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:41.035575  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:41.077733  148123 cri.go:89] found id: ""
	I1010 19:25:41.077817  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.077830  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:41.077840  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:41.077916  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:41.115439  148123 cri.go:89] found id: ""
	I1010 19:25:41.115476  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.115489  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:41.115498  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:41.115552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:41.152586  148123 cri.go:89] found id: ""
	I1010 19:25:41.152625  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.152636  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:41.152643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:41.152695  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:41.186807  148123 cri.go:89] found id: ""
	I1010 19:25:41.186840  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.186851  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:41.186862  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:41.186880  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:41.200404  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:41.200447  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:41.280717  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:41.280744  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:41.280760  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:41.360561  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:41.360611  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:41.404461  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:41.404497  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:39.226903  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:41.227077  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.727197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.570404  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.066523  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.050813  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.051094  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.955723  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:43.970895  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:43.970964  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:44.012162  148123 cri.go:89] found id: ""
	I1010 19:25:44.012194  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.012203  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:44.012209  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:44.012282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:44.074583  148123 cri.go:89] found id: ""
	I1010 19:25:44.074606  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.074614  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:44.074620  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:44.074675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:44.133562  148123 cri.go:89] found id: ""
	I1010 19:25:44.133588  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.133596  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:44.133602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:44.133658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:44.168832  148123 cri.go:89] found id: ""
	I1010 19:25:44.168883  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.168896  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:44.168908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:44.168967  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:44.204896  148123 cri.go:89] found id: ""
	I1010 19:25:44.204923  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.204934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:44.204943  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:44.205002  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:44.242410  148123 cri.go:89] found id: ""
	I1010 19:25:44.242437  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.242448  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:44.242456  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:44.242524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:44.278553  148123 cri.go:89] found id: ""
	I1010 19:25:44.278581  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.278589  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:44.278595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:44.278658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:44.316099  148123 cri.go:89] found id: ""
	I1010 19:25:44.316125  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.316132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:44.316141  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:44.316155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:44.365809  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:44.365860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:44.380129  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:44.380162  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:44.454241  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:44.454267  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:44.454281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:44.530928  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:44.530974  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.078279  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:47.092233  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:47.092310  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:47.129517  148123 cri.go:89] found id: ""
	I1010 19:25:47.129557  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.129573  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:47.129582  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:47.129654  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:47.170309  148123 cri.go:89] found id: ""
	I1010 19:25:47.170345  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.170357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:47.170365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:47.170432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:47.205110  148123 cri.go:89] found id: ""
	I1010 19:25:47.205145  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.205156  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:47.205162  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:47.205228  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:47.245668  148123 cri.go:89] found id: ""
	I1010 19:25:47.245695  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.245704  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:47.245711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:47.245768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:47.284549  148123 cri.go:89] found id: ""
	I1010 19:25:47.284576  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.284584  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:47.284590  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:47.284643  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:47.324676  148123 cri.go:89] found id: ""
	I1010 19:25:47.324708  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.324720  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:47.324729  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:47.324782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:47.364315  148123 cri.go:89] found id: ""
	I1010 19:25:47.364347  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.364356  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:47.364362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:47.364433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:47.400611  148123 cri.go:89] found id: ""
	I1010 19:25:47.400641  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.400652  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:47.400664  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:47.400680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:47.415185  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:47.415227  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:47.481638  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:47.481665  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:47.481681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:47.560135  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:47.560171  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.602144  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:47.602174  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:46.227386  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:48.227699  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.066887  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.565340  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.051459  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:49.550170  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:51.554542  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.151770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:50.165336  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:50.165403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:50.201045  148123 cri.go:89] found id: ""
	I1010 19:25:50.201072  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.201082  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:50.201089  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:50.201154  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:50.237047  148123 cri.go:89] found id: ""
	I1010 19:25:50.237082  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.237094  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:50.237102  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:50.237174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:50.273727  148123 cri.go:89] found id: ""
	I1010 19:25:50.273756  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.273767  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:50.273780  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:50.273843  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:50.311397  148123 cri.go:89] found id: ""
	I1010 19:25:50.311424  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.311433  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:50.311450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:50.311513  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:50.347593  148123 cri.go:89] found id: ""
	I1010 19:25:50.347625  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.347637  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:50.347705  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:50.347782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:50.382195  148123 cri.go:89] found id: ""
	I1010 19:25:50.382228  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.382240  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:50.382247  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:50.382313  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:50.419189  148123 cri.go:89] found id: ""
	I1010 19:25:50.419221  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.419229  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:50.419236  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:50.419297  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:50.454368  148123 cri.go:89] found id: ""
	I1010 19:25:50.454399  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.454410  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:50.454421  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:50.454440  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:50.495575  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:50.495623  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.548691  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:50.548737  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:50.564585  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:50.564621  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:50.637152  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:50.637174  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:50.637190  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.221939  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:53.235889  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:53.235968  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:53.278589  148123 cri.go:89] found id: ""
	I1010 19:25:53.278620  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.278632  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:53.278640  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:53.278705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:53.315062  148123 cri.go:89] found id: ""
	I1010 19:25:53.315090  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.315101  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:53.315109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:53.315182  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:53.348994  148123 cri.go:89] found id: ""
	I1010 19:25:53.349031  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.349040  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:53.349046  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:53.349099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:53.383334  148123 cri.go:89] found id: ""
	I1010 19:25:53.383363  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.383373  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:53.383380  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:53.383450  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:53.417888  148123 cri.go:89] found id: ""
	I1010 19:25:53.417920  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.417931  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:53.417938  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:53.418013  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:53.454757  148123 cri.go:89] found id: ""
	I1010 19:25:53.454787  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.454798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:53.454806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:53.454871  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:53.491422  148123 cri.go:89] found id: ""
	I1010 19:25:53.491452  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.491464  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:53.491472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:53.491541  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:53.531240  148123 cri.go:89] found id: ""
	I1010 19:25:53.531271  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.531282  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:53.531294  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:53.531310  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.727196  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.226957  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.065907  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:52.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:54.051112  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:56.554137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.582576  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:53.582608  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:53.596481  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:53.596511  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:53.666134  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:53.666162  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:53.666178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.745621  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:53.745659  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.290744  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:56.307536  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:56.307610  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:56.353456  148123 cri.go:89] found id: ""
	I1010 19:25:56.353484  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.353494  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:56.353501  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:56.353553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:56.394093  148123 cri.go:89] found id: ""
	I1010 19:25:56.394120  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.394131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:56.394138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:56.394203  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:56.428388  148123 cri.go:89] found id: ""
	I1010 19:25:56.428428  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.428441  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:56.428450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:56.428524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:56.464414  148123 cri.go:89] found id: ""
	I1010 19:25:56.464459  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.464472  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:56.464480  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:56.464563  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:56.505271  148123 cri.go:89] found id: ""
	I1010 19:25:56.505307  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.505316  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:56.505322  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:56.505374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:56.546424  148123 cri.go:89] found id: ""
	I1010 19:25:56.546456  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.546467  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:56.546475  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:56.546545  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:56.587458  148123 cri.go:89] found id: ""
	I1010 19:25:56.587489  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.587501  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:56.587509  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:56.587588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:56.628248  148123 cri.go:89] found id: ""
	I1010 19:25:56.628279  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.628291  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:56.628304  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:56.628320  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:56.642109  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:56.642141  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:56.723646  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:56.723678  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:56.723696  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:56.805849  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:56.805899  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.849863  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:56.849891  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:55.230447  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.726896  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:55.066248  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.565240  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.051145  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:01.554276  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.407093  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:59.422915  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:59.423005  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:59.462867  148123 cri.go:89] found id: ""
	I1010 19:25:59.462898  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.462909  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:59.462917  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:59.462980  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:59.496931  148123 cri.go:89] found id: ""
	I1010 19:25:59.496958  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.496968  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:59.496983  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:59.497049  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:59.532241  148123 cri.go:89] found id: ""
	I1010 19:25:59.532271  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.532283  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:59.532291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:59.532352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:59.571285  148123 cri.go:89] found id: ""
	I1010 19:25:59.571313  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.571324  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:59.571331  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:59.571391  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:59.606687  148123 cri.go:89] found id: ""
	I1010 19:25:59.606721  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.606734  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:59.606741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:59.606800  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:59.643247  148123 cri.go:89] found id: ""
	I1010 19:25:59.643276  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.643286  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:59.643294  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:59.643369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:59.679305  148123 cri.go:89] found id: ""
	I1010 19:25:59.679335  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.679344  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:59.679350  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:59.679407  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:59.716149  148123 cri.go:89] found id: ""
	I1010 19:25:59.716198  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.716210  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:59.716222  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:59.716239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:59.730334  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:59.730364  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:59.799759  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.799786  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:59.799804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:59.881883  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:59.881925  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:59.923755  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:59.923784  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.478043  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:02.492627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:02.492705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:02.532133  148123 cri.go:89] found id: ""
	I1010 19:26:02.532160  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.532172  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:02.532179  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:02.532244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:02.571915  148123 cri.go:89] found id: ""
	I1010 19:26:02.571943  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.571951  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:02.571957  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:02.572014  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:02.616330  148123 cri.go:89] found id: ""
	I1010 19:26:02.616365  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.616376  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:02.616385  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:02.616455  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:02.662245  148123 cri.go:89] found id: ""
	I1010 19:26:02.662275  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.662284  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:02.662291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:02.662354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:02.698692  148123 cri.go:89] found id: ""
	I1010 19:26:02.698720  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.698731  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:02.698739  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:02.698805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:02.736643  148123 cri.go:89] found id: ""
	I1010 19:26:02.736667  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.736677  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:02.736685  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:02.736742  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:02.780356  148123 cri.go:89] found id: ""
	I1010 19:26:02.780388  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.780399  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:02.780407  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:02.780475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:02.816716  148123 cri.go:89] found id: ""
	I1010 19:26:02.816744  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.816752  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:02.816761  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:02.816775  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:02.899002  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:02.899046  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:02.942060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:02.942093  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.997605  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:02.997661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:03.011768  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:03.011804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:03.096404  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.727075  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.227526  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.565903  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.066179  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.049656  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.050425  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:05.596971  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:05.612524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:05.612598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:05.649999  148123 cri.go:89] found id: ""
	I1010 19:26:05.650032  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.650044  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:05.650052  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:05.650119  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:05.684365  148123 cri.go:89] found id: ""
	I1010 19:26:05.684398  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.684419  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:05.684427  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:05.684494  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:05.721801  148123 cri.go:89] found id: ""
	I1010 19:26:05.721832  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.721840  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:05.721847  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:05.721908  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:05.762010  148123 cri.go:89] found id: ""
	I1010 19:26:05.762037  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.762046  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:05.762056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:05.762117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:05.803425  148123 cri.go:89] found id: ""
	I1010 19:26:05.803457  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.803468  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:05.803476  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:05.803551  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:05.838800  148123 cri.go:89] found id: ""
	I1010 19:26:05.838833  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.838845  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:05.838853  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:05.838921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:05.875110  148123 cri.go:89] found id: ""
	I1010 19:26:05.875147  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.875158  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:05.875167  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:05.875245  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:05.908951  148123 cri.go:89] found id: ""
	I1010 19:26:05.908988  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.909000  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:05.909012  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:05.909027  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:05.922263  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:05.922293  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:05.996441  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:05.996465  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:05.996484  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:06.078113  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:06.078160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:06.122919  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:06.122956  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:04.726335  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.728178  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.066573  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.564991  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.566655  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.050522  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:10.550288  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.674464  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:08.688452  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:08.688570  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:08.726240  148123 cri.go:89] found id: ""
	I1010 19:26:08.726269  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.726279  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:08.726287  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:08.726370  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:08.764762  148123 cri.go:89] found id: ""
	I1010 19:26:08.764790  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.764799  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:08.764806  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:08.764887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:08.805522  148123 cri.go:89] found id: ""
	I1010 19:26:08.805552  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.805563  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:08.805572  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:08.805642  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:08.845694  148123 cri.go:89] found id: ""
	I1010 19:26:08.845729  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.845740  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:08.845747  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:08.845817  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:08.885024  148123 cri.go:89] found id: ""
	I1010 19:26:08.885054  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.885066  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:08.885074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:08.885138  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:08.927500  148123 cri.go:89] found id: ""
	I1010 19:26:08.927531  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.927540  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:08.927547  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:08.927616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:08.963870  148123 cri.go:89] found id: ""
	I1010 19:26:08.963904  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.963916  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:08.963924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:08.963988  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:08.997984  148123 cri.go:89] found id: ""
	I1010 19:26:08.998017  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.998027  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:08.998039  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:08.998056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:09.049307  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:09.049341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:09.063341  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:09.063432  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:09.145190  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:09.145214  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:09.145226  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:09.231409  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:09.231445  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:11.773475  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:11.788981  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:11.789055  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:11.830289  148123 cri.go:89] found id: ""
	I1010 19:26:11.830319  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.830327  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:11.830333  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:11.830399  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:11.867093  148123 cri.go:89] found id: ""
	I1010 19:26:11.867120  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.867131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:11.867138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:11.867212  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:11.902146  148123 cri.go:89] found id: ""
	I1010 19:26:11.902181  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.902192  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:11.902201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:11.902271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:11.938652  148123 cri.go:89] found id: ""
	I1010 19:26:11.938681  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.938691  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:11.938703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:11.938771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:11.975903  148123 cri.go:89] found id: ""
	I1010 19:26:11.975937  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.975947  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:11.975955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:11.976020  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:12.014083  148123 cri.go:89] found id: ""
	I1010 19:26:12.014111  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.014119  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:12.014126  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:12.014190  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:12.057832  148123 cri.go:89] found id: ""
	I1010 19:26:12.057858  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.057867  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:12.057874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:12.057925  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:12.103253  148123 cri.go:89] found id: ""
	I1010 19:26:12.103286  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.103298  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:12.103311  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:12.103326  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:12.171062  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:12.171104  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:12.185463  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:12.185496  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:12.261568  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:12.261592  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:12.261607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:12.345819  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:12.345860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:09.226954  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.227205  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.227457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.066777  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.565854  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:12.551323  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:15.051745  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:14.886283  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:14.901117  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:14.901213  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:14.935235  148123 cri.go:89] found id: ""
	I1010 19:26:14.935264  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.935273  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:14.935280  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:14.935336  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:14.970617  148123 cri.go:89] found id: ""
	I1010 19:26:14.970642  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.970652  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:14.970658  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:14.970722  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:15.011086  148123 cri.go:89] found id: ""
	I1010 19:26:15.011113  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.011122  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:15.011128  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:15.011188  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:15.053004  148123 cri.go:89] found id: ""
	I1010 19:26:15.053026  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.053033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:15.053039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:15.053099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:15.089275  148123 cri.go:89] found id: ""
	I1010 19:26:15.089304  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.089312  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:15.089319  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:15.089383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:15.126620  148123 cri.go:89] found id: ""
	I1010 19:26:15.126645  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.126654  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:15.126660  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:15.126723  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:15.160920  148123 cri.go:89] found id: ""
	I1010 19:26:15.160956  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.160969  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:15.160979  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:15.161037  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:15.197430  148123 cri.go:89] found id: ""
	I1010 19:26:15.197462  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.197474  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:15.197486  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:15.197507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:15.240905  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:15.240953  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:15.297663  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:15.297719  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:15.312279  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:15.312313  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:15.395296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:15.395320  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:15.395336  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:17.978030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:17.992574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:17.992651  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:18.028846  148123 cri.go:89] found id: ""
	I1010 19:26:18.028890  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.028901  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:18.028908  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:18.028965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:18.073004  148123 cri.go:89] found id: ""
	I1010 19:26:18.073033  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.073041  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:18.073047  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:18.073118  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:18.112062  148123 cri.go:89] found id: ""
	I1010 19:26:18.112098  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.112111  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:18.112121  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:18.112209  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:18.151565  148123 cri.go:89] found id: ""
	I1010 19:26:18.151597  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.151608  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:18.151616  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:18.151675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:18.197483  148123 cri.go:89] found id: ""
	I1010 19:26:18.197509  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.197518  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:18.197524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:18.197591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:18.235151  148123 cri.go:89] found id: ""
	I1010 19:26:18.235181  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.235193  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:18.235201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:18.235279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:18.273807  148123 cri.go:89] found id: ""
	I1010 19:26:18.273841  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.273852  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:18.273858  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:18.273920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:18.311644  148123 cri.go:89] found id: ""
	I1010 19:26:18.311677  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.311688  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:18.311700  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:18.311716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:18.355599  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:18.355635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:18.407786  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:18.407830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:18.422944  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:18.422976  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:18.496830  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:18.496873  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:18.496887  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:15.227600  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.726712  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:16.065701  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:18.066861  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.558257  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.050914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:21.108300  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:21.123177  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:21.123243  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:21.162544  148123 cri.go:89] found id: ""
	I1010 19:26:21.162575  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.162586  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:21.162595  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:21.162662  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:21.199799  148123 cri.go:89] found id: ""
	I1010 19:26:21.199828  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.199839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:21.199845  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:21.199901  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:21.246333  148123 cri.go:89] found id: ""
	I1010 19:26:21.246357  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.246366  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:21.246372  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:21.246433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:21.282681  148123 cri.go:89] found id: ""
	I1010 19:26:21.282719  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.282732  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:21.282740  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:21.282810  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:21.319500  148123 cri.go:89] found id: ""
	I1010 19:26:21.319535  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.319546  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:21.319554  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:21.319623  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:21.355779  148123 cri.go:89] found id: ""
	I1010 19:26:21.355809  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.355820  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:21.355828  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:21.355902  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:21.394567  148123 cri.go:89] found id: ""
	I1010 19:26:21.394608  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.394617  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:21.394623  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:21.394684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:21.430009  148123 cri.go:89] found id: ""
	I1010 19:26:21.430046  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.430058  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:21.430069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:21.430089  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:21.443267  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:21.443301  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:21.517046  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:21.517068  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:21.517083  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:21.594927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:21.594978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:21.634274  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:21.634311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:20.227157  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.727736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.566652  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:23.066459  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.550526  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.050647  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:24.189760  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:24.204355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:24.204420  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:24.248130  148123 cri.go:89] found id: ""
	I1010 19:26:24.248160  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.248171  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:24.248178  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:24.248244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:24.293281  148123 cri.go:89] found id: ""
	I1010 19:26:24.293312  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.293324  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:24.293331  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:24.293400  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:24.350709  148123 cri.go:89] found id: ""
	I1010 19:26:24.350743  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.350755  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:24.350765  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:24.350838  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:24.394113  148123 cri.go:89] found id: ""
	I1010 19:26:24.394152  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.394170  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:24.394181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:24.394256  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:24.445999  148123 cri.go:89] found id: ""
	I1010 19:26:24.446030  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.446042  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:24.446051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:24.446120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:24.483566  148123 cri.go:89] found id: ""
	I1010 19:26:24.483596  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.483605  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:24.483612  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:24.483665  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:24.520705  148123 cri.go:89] found id: ""
	I1010 19:26:24.520736  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.520748  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:24.520757  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:24.520825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:24.557316  148123 cri.go:89] found id: ""
	I1010 19:26:24.557346  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.557355  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:24.557364  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:24.557376  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:24.608065  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:24.608109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:24.623202  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:24.623250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:24.694665  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:24.694697  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:24.694716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.783369  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:24.783414  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.327524  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:27.341132  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:27.341214  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:27.376589  148123 cri.go:89] found id: ""
	I1010 19:26:27.376618  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.376627  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:27.376633  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:27.376684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:27.414452  148123 cri.go:89] found id: ""
	I1010 19:26:27.414491  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.414502  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:27.414510  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:27.414572  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:27.449839  148123 cri.go:89] found id: ""
	I1010 19:26:27.449867  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.449875  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:27.449882  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:27.449932  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:27.492445  148123 cri.go:89] found id: ""
	I1010 19:26:27.492472  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.492480  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:27.492487  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:27.492549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:27.533032  148123 cri.go:89] found id: ""
	I1010 19:26:27.533085  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.533095  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:27.533122  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:27.533199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:27.571096  148123 cri.go:89] found id: ""
	I1010 19:26:27.571122  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.571130  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:27.571135  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:27.571204  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:27.608762  148123 cri.go:89] found id: ""
	I1010 19:26:27.608798  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.608809  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:27.608818  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:27.608896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:27.648200  148123 cri.go:89] found id: ""
	I1010 19:26:27.648236  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.648248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:27.648260  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:27.648275  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.698124  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:27.698154  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:27.755009  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:27.755052  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:27.770048  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:27.770084  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:27.845475  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:27.845505  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:27.845519  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.729352  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:26.731831  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.566028  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.567052  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.555698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.049914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.423117  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:30.436706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:30.436768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:30.473367  148123 cri.go:89] found id: ""
	I1010 19:26:30.473396  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.473408  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:30.473417  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:30.473470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:30.510560  148123 cri.go:89] found id: ""
	I1010 19:26:30.510599  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.510610  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:30.510619  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:30.510687  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:30.549682  148123 cri.go:89] found id: ""
	I1010 19:26:30.549715  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.549726  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:30.549734  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:30.549798  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:30.586401  148123 cri.go:89] found id: ""
	I1010 19:26:30.586425  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.586434  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:30.586441  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:30.586492  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:30.622012  148123 cri.go:89] found id: ""
	I1010 19:26:30.622037  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.622045  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:30.622052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:30.622107  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:30.658336  148123 cri.go:89] found id: ""
	I1010 19:26:30.658364  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.658372  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:30.658378  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:30.658442  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:30.695495  148123 cri.go:89] found id: ""
	I1010 19:26:30.695523  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.695532  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:30.695538  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:30.695591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:30.735093  148123 cri.go:89] found id: ""
	I1010 19:26:30.735125  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.735136  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:30.735148  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:30.735163  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:30.816097  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:30.816140  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:30.853878  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:30.853915  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:30.906289  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:30.906341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:30.923742  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:30.923769  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:30.993226  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.493799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:33.508083  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:33.508166  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:33.545019  148123 cri.go:89] found id: ""
	I1010 19:26:33.545055  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.545072  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:33.545081  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:33.545150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:29.226673  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:31.227117  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.727777  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.068231  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.566025  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.050118  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:34.051720  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:36.550138  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.584215  148123 cri.go:89] found id: ""
	I1010 19:26:33.584242  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.584252  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:33.584259  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:33.584315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:33.620598  148123 cri.go:89] found id: ""
	I1010 19:26:33.620627  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.620636  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:33.620643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:33.620691  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:33.655225  148123 cri.go:89] found id: ""
	I1010 19:26:33.655252  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.655260  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:33.655267  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:33.655322  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:33.693464  148123 cri.go:89] found id: ""
	I1010 19:26:33.693490  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.693498  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:33.693505  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:33.693554  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:33.734024  148123 cri.go:89] found id: ""
	I1010 19:26:33.734059  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.734071  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:33.734079  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:33.734141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:33.769434  148123 cri.go:89] found id: ""
	I1010 19:26:33.769467  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.769476  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:33.769483  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:33.769533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:33.809011  148123 cri.go:89] found id: ""
	I1010 19:26:33.809040  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.809049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:33.809059  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:33.809071  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:33.864186  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:33.864230  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:33.878681  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:33.878711  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:33.958225  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.958250  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:33.958265  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:34.034730  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:34.034774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.577123  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:36.591494  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:36.591560  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:36.625170  148123 cri.go:89] found id: ""
	I1010 19:26:36.625210  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.625222  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:36.625230  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:36.625295  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:36.660790  148123 cri.go:89] found id: ""
	I1010 19:26:36.660821  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.660831  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:36.660838  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:36.660911  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:36.695742  148123 cri.go:89] found id: ""
	I1010 19:26:36.695774  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.695786  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:36.695793  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:36.695854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:36.729523  148123 cri.go:89] found id: ""
	I1010 19:26:36.729552  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.729561  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:36.729569  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:36.729630  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:36.765515  148123 cri.go:89] found id: ""
	I1010 19:26:36.765549  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.765561  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:36.765571  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:36.765635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:36.799110  148123 cri.go:89] found id: ""
	I1010 19:26:36.799144  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.799155  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:36.799164  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:36.799224  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:36.832806  148123 cri.go:89] found id: ""
	I1010 19:26:36.832834  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.832845  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:36.832865  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:36.832933  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:36.869093  148123 cri.go:89] found id: ""
	I1010 19:26:36.869126  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.869137  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:36.869149  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:36.869165  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:36.922229  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:36.922276  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:36.936973  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:36.937019  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:37.016400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:37.016425  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:37.016448  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:37.106308  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:37.106354  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.227451  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.726229  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:35.067396  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:37.565711  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.550438  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:41.050698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:39.652101  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:39.665546  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:39.665639  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:39.699646  148123 cri.go:89] found id: ""
	I1010 19:26:39.699675  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.699686  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:39.699693  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:39.699745  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:39.735568  148123 cri.go:89] found id: ""
	I1010 19:26:39.735592  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.735600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:39.735606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:39.735656  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:39.770021  148123 cri.go:89] found id: ""
	I1010 19:26:39.770049  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.770060  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:39.770067  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:39.770139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:39.804867  148123 cri.go:89] found id: ""
	I1010 19:26:39.804894  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.804903  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:39.804910  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:39.804972  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:39.847074  148123 cri.go:89] found id: ""
	I1010 19:26:39.847104  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.847115  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:39.847124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:39.847200  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:39.890334  148123 cri.go:89] found id: ""
	I1010 19:26:39.890368  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.890379  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:39.890387  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:39.890456  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:39.926316  148123 cri.go:89] found id: ""
	I1010 19:26:39.926346  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.926355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:39.926362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:39.926416  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:39.966179  148123 cri.go:89] found id: ""
	I1010 19:26:39.966215  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.966227  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:39.966239  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:39.966255  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:40.047457  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:40.047505  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.086611  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:40.086656  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:40.139123  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:40.139167  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:40.152966  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:40.152995  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:40.231009  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:42.731851  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:42.748201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:42.748287  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:42.787567  148123 cri.go:89] found id: ""
	I1010 19:26:42.787599  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.787607  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:42.787614  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:42.787697  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:42.831051  148123 cri.go:89] found id: ""
	I1010 19:26:42.831089  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.831100  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:42.831109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:42.831180  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:42.871352  148123 cri.go:89] found id: ""
	I1010 19:26:42.871390  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.871402  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:42.871410  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:42.871484  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:42.910283  148123 cri.go:89] found id: ""
	I1010 19:26:42.910316  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.910327  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:42.910335  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:42.910403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:42.946971  148123 cri.go:89] found id: ""
	I1010 19:26:42.947003  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.947012  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:42.947019  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:42.947087  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:42.985484  148123 cri.go:89] found id: ""
	I1010 19:26:42.985517  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.985528  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:42.985537  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:42.985603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:43.024500  148123 cri.go:89] found id: ""
	I1010 19:26:43.024535  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.024544  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:43.024552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:43.024608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:43.064008  148123 cri.go:89] found id: ""
	I1010 19:26:43.064039  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.064049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:43.064060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:43.064075  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:43.119367  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:43.119415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:43.133396  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:43.133428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:43.207447  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:43.207470  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:43.207491  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:43.290416  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:43.290456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.727919  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.227782  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:40.066461  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:42.565505  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.051835  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.052308  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.834115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:45.849172  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:45.849238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:45.886020  148123 cri.go:89] found id: ""
	I1010 19:26:45.886049  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.886060  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:45.886067  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:45.886152  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:45.923188  148123 cri.go:89] found id: ""
	I1010 19:26:45.923218  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.923245  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:45.923253  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:45.923316  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:45.960297  148123 cri.go:89] found id: ""
	I1010 19:26:45.960337  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.960351  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:45.960361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:45.960432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:45.995140  148123 cri.go:89] found id: ""
	I1010 19:26:45.995173  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.995184  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:45.995191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:45.995265  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:46.040390  148123 cri.go:89] found id: ""
	I1010 19:26:46.040418  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.040426  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:46.040433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:46.040500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:46.079999  148123 cri.go:89] found id: ""
	I1010 19:26:46.080029  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.080042  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:46.080052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:46.080105  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:46.117331  148123 cri.go:89] found id: ""
	I1010 19:26:46.117357  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.117364  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:46.117370  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:46.117441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:46.152248  148123 cri.go:89] found id: ""
	I1010 19:26:46.152279  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.152290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:46.152303  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:46.152319  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:46.192503  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:46.192533  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:46.246419  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:46.246456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:46.259864  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:46.259895  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:46.338518  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:46.338549  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:46.338567  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:45.726776  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.228318  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:44.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.065636  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.551013  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:50.053824  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.924216  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:48.938741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:48.938805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:48.973310  148123 cri.go:89] found id: ""
	I1010 19:26:48.973339  148123 logs.go:282] 0 containers: []
	W1010 19:26:48.973348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:48.973356  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:48.973411  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:49.010467  148123 cri.go:89] found id: ""
	I1010 19:26:49.010492  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.010500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:49.010506  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:49.010585  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:49.051621  148123 cri.go:89] found id: ""
	I1010 19:26:49.051646  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.051653  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:49.051664  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:49.051727  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:49.091090  148123 cri.go:89] found id: ""
	I1010 19:26:49.091121  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.091132  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:49.091140  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:49.091202  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:49.127669  148123 cri.go:89] found id: ""
	I1010 19:26:49.127712  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.127724  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:49.127732  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:49.127804  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:49.164446  148123 cri.go:89] found id: ""
	I1010 19:26:49.164476  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.164485  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:49.164491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:49.164553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:49.201222  148123 cri.go:89] found id: ""
	I1010 19:26:49.201263  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.201275  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:49.201284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:49.201345  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:49.238258  148123 cri.go:89] found id: ""
	I1010 19:26:49.238283  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.238290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:49.238299  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:49.238311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:49.313576  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:49.313619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:49.351066  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:49.351096  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:49.402772  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:49.402823  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:49.417713  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:49.417752  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:49.488834  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:51.989031  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:52.003056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:52.003140  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:52.039682  148123 cri.go:89] found id: ""
	I1010 19:26:52.039709  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.039718  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:52.039725  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:52.039788  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:52.081402  148123 cri.go:89] found id: ""
	I1010 19:26:52.081433  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.081443  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:52.081449  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:52.081502  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:52.118270  148123 cri.go:89] found id: ""
	I1010 19:26:52.118304  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.118315  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:52.118325  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:52.118392  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:52.154692  148123 cri.go:89] found id: ""
	I1010 19:26:52.154724  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.154735  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:52.154743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:52.154807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:52.190056  148123 cri.go:89] found id: ""
	I1010 19:26:52.190085  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.190094  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:52.190101  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:52.190161  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:52.225468  148123 cri.go:89] found id: ""
	I1010 19:26:52.225501  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.225511  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:52.225521  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:52.225589  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:52.260682  148123 cri.go:89] found id: ""
	I1010 19:26:52.260710  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.260718  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:52.260724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:52.260774  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:52.297613  148123 cri.go:89] found id: ""
	I1010 19:26:52.297642  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.297659  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:52.297672  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:52.297689  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:52.352224  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:52.352267  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:52.367003  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:52.367033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:52.443124  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:52.443157  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:52.443173  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:52.522391  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:52.522433  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:50.726363  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.727069  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:49.069109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:51.566132  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:53.567867  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.554195  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.050995  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.066877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:55.082191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:55.082258  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:55.120729  148123 cri.go:89] found id: ""
	I1010 19:26:55.120763  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.120775  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:55.120784  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:55.120863  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:55.154795  148123 cri.go:89] found id: ""
	I1010 19:26:55.154827  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.154839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:55.154848  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:55.154904  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:55.194846  148123 cri.go:89] found id: ""
	I1010 19:26:55.194876  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.194889  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:55.194897  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:55.194958  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:55.230173  148123 cri.go:89] found id: ""
	I1010 19:26:55.230201  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.230212  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:55.230220  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:55.230290  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:55.264999  148123 cri.go:89] found id: ""
	I1010 19:26:55.265025  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.265032  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:55.265039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:55.265096  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:55.303666  148123 cri.go:89] found id: ""
	I1010 19:26:55.303695  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.303703  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:55.303724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:55.303797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:55.342061  148123 cri.go:89] found id: ""
	I1010 19:26:55.342087  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.342098  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:55.342106  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:55.342171  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:55.378305  148123 cri.go:89] found id: ""
	I1010 19:26:55.378336  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.378345  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:55.378354  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:55.378365  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.416815  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:55.416865  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:55.467371  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:55.467415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:55.481704  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:55.481744  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:55.559219  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:55.559242  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:55.559254  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.141472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:58.154326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:58.154394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:58.193053  148123 cri.go:89] found id: ""
	I1010 19:26:58.193080  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.193088  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:58.193094  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:58.193142  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:58.232514  148123 cri.go:89] found id: ""
	I1010 19:26:58.232538  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.232545  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:58.232551  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:58.232599  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:58.266724  148123 cri.go:89] found id: ""
	I1010 19:26:58.266756  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.266765  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:58.266771  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:58.266824  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:58.301986  148123 cri.go:89] found id: ""
	I1010 19:26:58.302012  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.302019  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:58.302031  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:58.302080  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:58.340394  148123 cri.go:89] found id: ""
	I1010 19:26:58.340431  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.340440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:58.340448  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:58.340500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:58.407035  148123 cri.go:89] found id: ""
	I1010 19:26:58.407069  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.407081  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:58.407088  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:58.407151  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:58.446650  148123 cri.go:89] found id: ""
	I1010 19:26:58.446682  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.446694  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:58.446703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:58.446763  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:58.480719  148123 cri.go:89] found id: ""
	I1010 19:26:58.480750  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.480759  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:58.480768  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:58.480781  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.561568  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:58.561602  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.227199  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.726841  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:56.065787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.566732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.550718  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:59.550793  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.609237  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:58.609264  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:58.659705  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:58.659748  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:58.674646  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:58.674680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:58.750922  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.252098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:01.265420  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:01.265497  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:01.300133  148123 cri.go:89] found id: ""
	I1010 19:27:01.300168  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.300180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:01.300188  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:01.300272  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:01.337451  148123 cri.go:89] found id: ""
	I1010 19:27:01.337477  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.337485  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:01.337492  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:01.337539  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:01.371987  148123 cri.go:89] found id: ""
	I1010 19:27:01.372019  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.372028  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:01.372034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:01.372098  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:01.408996  148123 cri.go:89] found id: ""
	I1010 19:27:01.409022  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.409033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:01.409042  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:01.409109  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:01.443558  148123 cri.go:89] found id: ""
	I1010 19:27:01.443587  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.443595  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:01.443602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:01.443663  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:01.478517  148123 cri.go:89] found id: ""
	I1010 19:27:01.478546  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.478555  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:01.478561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:01.478611  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:01.513528  148123 cri.go:89] found id: ""
	I1010 19:27:01.513556  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.513568  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:01.513576  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:01.513641  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:01.558861  148123 cri.go:89] found id: ""
	I1010 19:27:01.558897  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.558909  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:01.558921  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:01.558937  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:01.599719  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:01.599747  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:01.650736  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:01.650774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:01.664643  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:01.664669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:01.736533  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.736557  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:01.736570  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:00.225540  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.226962  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:00.567193  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:03.066587  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.050439  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.050984  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:06.550977  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.317302  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:04.330450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:04.330519  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:04.368893  148123 cri.go:89] found id: ""
	I1010 19:27:04.368923  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.368932  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:04.368939  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:04.368993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:04.406598  148123 cri.go:89] found id: ""
	I1010 19:27:04.406625  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.406634  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:04.406640  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:04.406692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:04.441485  148123 cri.go:89] found id: ""
	I1010 19:27:04.441515  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.441525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:04.441532  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:04.441581  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:04.476663  148123 cri.go:89] found id: ""
	I1010 19:27:04.476690  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.476698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:04.476704  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:04.476765  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:04.512124  148123 cri.go:89] found id: ""
	I1010 19:27:04.512171  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.512186  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:04.512195  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:04.512260  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:04.547902  148123 cri.go:89] found id: ""
	I1010 19:27:04.547929  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.547940  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:04.547949  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:04.548008  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:04.585257  148123 cri.go:89] found id: ""
	I1010 19:27:04.585287  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.585297  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:04.585303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:04.585352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:04.621019  148123 cri.go:89] found id: ""
	I1010 19:27:04.621048  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.621057  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:04.621068  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:04.621080  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:04.662770  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:04.662801  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:04.714551  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:04.714592  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:04.730922  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:04.730951  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:04.841139  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.841163  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:04.841178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.428528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:07.442423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:07.442498  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:07.476722  148123 cri.go:89] found id: ""
	I1010 19:27:07.476753  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.476764  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:07.476772  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:07.476835  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:07.512211  148123 cri.go:89] found id: ""
	I1010 19:27:07.512245  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.512256  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:07.512263  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:07.512325  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:07.546546  148123 cri.go:89] found id: ""
	I1010 19:27:07.546588  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.546601  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:07.546619  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:07.546688  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:07.585743  148123 cri.go:89] found id: ""
	I1010 19:27:07.585768  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.585777  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:07.585783  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:07.585848  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:07.626827  148123 cri.go:89] found id: ""
	I1010 19:27:07.626855  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.626865  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:07.626874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:07.626960  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:07.661910  148123 cri.go:89] found id: ""
	I1010 19:27:07.661940  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.661948  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:07.661955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:07.662072  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:07.698255  148123 cri.go:89] found id: ""
	I1010 19:27:07.698288  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.698301  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:07.698309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:07.698374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:07.734389  148123 cri.go:89] found id: ""
	I1010 19:27:07.734417  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.734428  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:07.734440  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:07.734454  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.814089  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:07.814130  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:07.854799  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:07.854831  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:07.906595  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:07.906635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:07.921928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:07.921966  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:07.989839  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.727522  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.226694  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:05.565868  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.567139  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:09.050772  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:11.051291  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.490115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:10.504216  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:10.504292  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:10.554789  148123 cri.go:89] found id: ""
	I1010 19:27:10.554815  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.554825  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:10.554831  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:10.554889  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:10.618970  148123 cri.go:89] found id: ""
	I1010 19:27:10.618997  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.619005  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:10.619012  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:10.619061  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:10.666900  148123 cri.go:89] found id: ""
	I1010 19:27:10.666933  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.666946  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:10.666953  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:10.667023  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:10.704364  148123 cri.go:89] found id: ""
	I1010 19:27:10.704405  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.704430  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:10.704450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:10.704525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:10.742341  148123 cri.go:89] found id: ""
	I1010 19:27:10.742380  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.742389  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:10.742396  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:10.742461  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:10.783056  148123 cri.go:89] found id: ""
	I1010 19:27:10.783088  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.783098  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:10.783105  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:10.783174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:10.823198  148123 cri.go:89] found id: ""
	I1010 19:27:10.823224  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.823233  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:10.823243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:10.823302  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:10.858154  148123 cri.go:89] found id: ""
	I1010 19:27:10.858195  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.858204  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:10.858213  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:10.858229  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:10.935491  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:10.935518  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:10.935532  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:11.015653  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:11.015694  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:11.058765  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:11.058793  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:11.116748  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:11.116799  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:09.727270  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.225797  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.065372  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.065695  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.550669  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.051044  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.631579  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:13.644885  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:13.644953  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:13.685242  148123 cri.go:89] found id: ""
	I1010 19:27:13.685273  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.685285  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:13.685294  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:13.685364  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:13.720831  148123 cri.go:89] found id: ""
	I1010 19:27:13.720866  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.720877  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:13.720885  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:13.720947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:13.766374  148123 cri.go:89] found id: ""
	I1010 19:27:13.766404  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.766413  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:13.766419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:13.766468  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:13.804550  148123 cri.go:89] found id: ""
	I1010 19:27:13.804585  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.804597  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:13.804606  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:13.804674  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:13.842406  148123 cri.go:89] found id: ""
	I1010 19:27:13.842432  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.842440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:13.842460  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:13.842678  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:13.882365  148123 cri.go:89] found id: ""
	I1010 19:27:13.882402  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.882415  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:13.882423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:13.882505  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:13.926016  148123 cri.go:89] found id: ""
	I1010 19:27:13.926052  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.926065  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:13.926075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:13.926177  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:13.960485  148123 cri.go:89] found id: ""
	I1010 19:27:13.960523  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.960533  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:13.960542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:13.960558  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:14.013013  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:14.013055  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:14.026906  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:14.026935  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:14.104469  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:14.104494  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:14.104507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:14.185917  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:14.185959  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:16.732088  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:16.746646  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:16.746710  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:16.784715  148123 cri.go:89] found id: ""
	I1010 19:27:16.784739  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.784749  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:16.784755  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:16.784807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:16.824181  148123 cri.go:89] found id: ""
	I1010 19:27:16.824210  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.824220  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:16.824228  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:16.824289  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:16.860901  148123 cri.go:89] found id: ""
	I1010 19:27:16.860932  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.860941  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:16.860947  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:16.860997  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:16.895127  148123 cri.go:89] found id: ""
	I1010 19:27:16.895159  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.895175  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:16.895183  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:16.895266  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:16.931989  148123 cri.go:89] found id: ""
	I1010 19:27:16.932020  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.932028  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:16.932035  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:16.932086  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:16.967254  148123 cri.go:89] found id: ""
	I1010 19:27:16.967283  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.967292  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:16.967299  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:16.967347  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:17.003010  148123 cri.go:89] found id: ""
	I1010 19:27:17.003035  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.003043  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:17.003050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:17.003097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:17.040053  148123 cri.go:89] found id: ""
	I1010 19:27:17.040093  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.040102  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:17.040118  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:17.040131  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:17.093176  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:17.093217  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:17.108086  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:17.108116  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:17.186423  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:17.186452  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:17.186467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:17.271884  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:17.271926  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:14.227197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.739354  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:14.066233  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.565852  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.566337  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.051613  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:20.549888  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:19.817954  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:19.831924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:19.831993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:19.873415  148123 cri.go:89] found id: ""
	I1010 19:27:19.873441  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.873459  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:19.873466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:19.873527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:19.912174  148123 cri.go:89] found id: ""
	I1010 19:27:19.912207  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.912219  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:19.912227  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:19.912298  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:19.948422  148123 cri.go:89] found id: ""
	I1010 19:27:19.948456  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.948466  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:19.948472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:19.948524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:19.983917  148123 cri.go:89] found id: ""
	I1010 19:27:19.983949  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.983962  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:19.983970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:19.984024  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:20.019160  148123 cri.go:89] found id: ""
	I1010 19:27:20.019187  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.019198  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:20.019207  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:20.019271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:20.056073  148123 cri.go:89] found id: ""
	I1010 19:27:20.056104  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.056116  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:20.056124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:20.056187  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:20.095877  148123 cri.go:89] found id: ""
	I1010 19:27:20.095916  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.095928  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:20.095935  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:20.096007  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:20.132360  148123 cri.go:89] found id: ""
	I1010 19:27:20.132385  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.132394  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:20.132402  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:20.132413  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:20.190573  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:20.190619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:20.205785  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:20.205819  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:20.278882  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:20.278911  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:20.278924  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:20.363982  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:20.364024  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:22.906059  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:22.919839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:22.919912  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:22.958203  148123 cri.go:89] found id: ""
	I1010 19:27:22.958242  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.958252  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:22.958258  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:22.958312  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:22.995834  148123 cri.go:89] found id: ""
	I1010 19:27:22.995866  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.995874  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:22.995880  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:22.995945  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:23.035913  148123 cri.go:89] found id: ""
	I1010 19:27:23.035950  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.035962  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:23.035970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:23.036039  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:23.073995  148123 cri.go:89] found id: ""
	I1010 19:27:23.074036  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.074049  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:23.074057  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:23.074117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:23.111190  148123 cri.go:89] found id: ""
	I1010 19:27:23.111222  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.111234  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:23.111243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:23.111305  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:23.147771  148123 cri.go:89] found id: ""
	I1010 19:27:23.147797  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.147806  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:23.147814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:23.147878  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:23.185485  148123 cri.go:89] found id: ""
	I1010 19:27:23.185517  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.185527  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:23.185535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:23.185598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:23.222030  148123 cri.go:89] found id: ""
	I1010 19:27:23.222060  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.222070  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:23.222081  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:23.222097  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:23.301826  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:23.301849  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:23.301861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:23.378688  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:23.378730  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:23.426683  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:23.426717  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:23.478425  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:23.478467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:19.226994  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.727366  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.067094  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:23.567075  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:22.550076  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:24.551681  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:25.993768  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:26.008311  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:26.008383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:26.046315  148123 cri.go:89] found id: ""
	I1010 19:27:26.046343  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.046355  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:26.046364  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:26.046418  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:26.081823  148123 cri.go:89] found id: ""
	I1010 19:27:26.081847  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.081855  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:26.081861  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:26.081913  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:26.117139  148123 cri.go:89] found id: ""
	I1010 19:27:26.117167  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.117183  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:26.117191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:26.117242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:26.154477  148123 cri.go:89] found id: ""
	I1010 19:27:26.154501  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.154510  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:26.154517  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:26.154568  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:26.187497  148123 cri.go:89] found id: ""
	I1010 19:27:26.187527  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.187535  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:26.187541  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:26.187604  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:26.223110  148123 cri.go:89] found id: ""
	I1010 19:27:26.223140  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.223151  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:26.223160  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:26.223226  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:26.264975  148123 cri.go:89] found id: ""
	I1010 19:27:26.265003  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.265011  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:26.265017  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:26.265067  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:26.307467  148123 cri.go:89] found id: ""
	I1010 19:27:26.307494  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.307503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:26.307512  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:26.307524  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:26.346313  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:26.346342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:26.400069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:26.400109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:26.415467  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:26.415500  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:26.486986  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:26.487016  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:26.487033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:24.226736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.228720  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.726470  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.067100  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.565675  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:27.051110  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.051207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.553085  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.064522  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:29.077631  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:29.077696  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:29.116127  148123 cri.go:89] found id: ""
	I1010 19:27:29.116154  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.116164  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:29.116172  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:29.116241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:29.152863  148123 cri.go:89] found id: ""
	I1010 19:27:29.152893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.152901  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:29.152907  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:29.152963  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:29.189951  148123 cri.go:89] found id: ""
	I1010 19:27:29.189983  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.189992  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:29.189999  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:29.190060  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:29.226611  148123 cri.go:89] found id: ""
	I1010 19:27:29.226646  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.226657  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:29.226665  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:29.226729  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:29.271884  148123 cri.go:89] found id: ""
	I1010 19:27:29.271921  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.271934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:29.271944  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:29.272016  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:29.306128  148123 cri.go:89] found id: ""
	I1010 19:27:29.306168  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.306181  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:29.306191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:29.306255  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:29.341869  148123 cri.go:89] found id: ""
	I1010 19:27:29.341893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.341901  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:29.341908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:29.341962  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:29.376213  148123 cri.go:89] found id: ""
	I1010 19:27:29.376240  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.376249  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:29.376259  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:29.376273  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:29.428827  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:29.428878  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:29.443719  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:29.443754  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:29.516166  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:29.516193  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:29.516208  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:29.596643  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:29.596681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:32.135286  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:32.148791  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:32.148893  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:32.185511  148123 cri.go:89] found id: ""
	I1010 19:27:32.185542  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.185553  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:32.185560  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:32.185626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:32.222908  148123 cri.go:89] found id: ""
	I1010 19:27:32.222934  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.222942  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:32.222948  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:32.222999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:32.260998  148123 cri.go:89] found id: ""
	I1010 19:27:32.261033  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.261045  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:32.261063  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:32.261128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:32.298311  148123 cri.go:89] found id: ""
	I1010 19:27:32.298339  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.298348  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:32.298355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:32.298409  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:32.334134  148123 cri.go:89] found id: ""
	I1010 19:27:32.334220  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.334236  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:32.334249  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:32.334319  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:32.369690  148123 cri.go:89] found id: ""
	I1010 19:27:32.369723  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.369735  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:32.369743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:32.369807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:32.407164  148123 cri.go:89] found id: ""
	I1010 19:27:32.407210  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.407218  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:32.407224  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:32.407279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:32.446468  148123 cri.go:89] found id: ""
	I1010 19:27:32.446494  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.446505  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:32.446519  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:32.446535  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:32.497953  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:32.497997  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:32.513590  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:32.513620  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:32.594037  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:32.594072  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:32.594090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:32.673546  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:32.673587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:30.727725  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:32.727813  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.066731  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:33.067815  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:34.050574  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:36.550119  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.226084  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:35.241152  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:35.241222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:35.283208  148123 cri.go:89] found id: ""
	I1010 19:27:35.283245  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.283269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:35.283286  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:35.283346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:35.320406  148123 cri.go:89] found id: ""
	I1010 19:27:35.320444  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.320457  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:35.320466  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:35.320523  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:35.360984  148123 cri.go:89] found id: ""
	I1010 19:27:35.361015  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.361027  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:35.361034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:35.361097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:35.400191  148123 cri.go:89] found id: ""
	I1010 19:27:35.400219  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.400230  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:35.400244  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:35.400314  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:35.438092  148123 cri.go:89] found id: ""
	I1010 19:27:35.438126  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.438139  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:35.438147  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:35.438215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:35.472656  148123 cri.go:89] found id: ""
	I1010 19:27:35.472681  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.472688  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:35.472694  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:35.472743  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.505895  148123 cri.go:89] found id: ""
	I1010 19:27:35.505931  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.505942  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:35.505950  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:35.506015  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:35.540406  148123 cri.go:89] found id: ""
	I1010 19:27:35.540441  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.540451  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:35.540462  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:35.540478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:35.555874  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:35.555903  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:35.628400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:35.628427  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:35.628453  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:35.708262  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:35.708305  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:35.752796  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:35.752824  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.306212  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:38.319473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:38.319543  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:38.357493  148123 cri.go:89] found id: ""
	I1010 19:27:38.357520  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.357529  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:38.357536  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:38.357588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:38.395908  148123 cri.go:89] found id: ""
	I1010 19:27:38.395935  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.395943  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:38.395949  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:38.396010  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:38.434495  148123 cri.go:89] found id: ""
	I1010 19:27:38.434529  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.434539  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:38.434545  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:38.434608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:38.474647  148123 cri.go:89] found id: ""
	I1010 19:27:38.474686  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.474698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:38.474710  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:38.474784  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:38.511105  148123 cri.go:89] found id: ""
	I1010 19:27:38.511133  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.511141  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:38.511148  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:38.511199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:38.551011  148123 cri.go:89] found id: ""
	I1010 19:27:38.551055  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.551066  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:38.551074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:38.551139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.227301  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:37.726528  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.567838  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.066658  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.552499  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.544561  147758 pod_ready.go:82] duration metric: took 4m0.00091784s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	E1010 19:27:40.544600  147758 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:27:40.544623  147758 pod_ready.go:39] duration metric: took 4m15.623470592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:27:40.544664  147758 kubeadm.go:597] duration metric: took 4m22.92080204s to restartPrimaryControlPlane
	W1010 19:27:40.544737  147758 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:40.544829  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:38.590579  148123 cri.go:89] found id: ""
	I1010 19:27:38.590606  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.590615  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:38.590621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:38.590686  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:38.627775  148123 cri.go:89] found id: ""
	I1010 19:27:38.627812  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.627826  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:38.627838  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:38.627854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.683195  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:38.683239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:38.697220  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:38.697250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:38.771619  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:38.771651  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:38.771668  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:38.851324  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:38.851362  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.408165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:41.422958  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:41.423032  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:41.466774  148123 cri.go:89] found id: ""
	I1010 19:27:41.466806  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.466816  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:41.466823  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:41.466874  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:41.506429  148123 cri.go:89] found id: ""
	I1010 19:27:41.506470  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.506482  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:41.506489  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:41.506555  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:41.548929  148123 cri.go:89] found id: ""
	I1010 19:27:41.548965  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.548976  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:41.548983  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:41.549044  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:41.592243  148123 cri.go:89] found id: ""
	I1010 19:27:41.592274  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.592283  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:41.592290  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:41.592342  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:41.628516  148123 cri.go:89] found id: ""
	I1010 19:27:41.628558  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.628570  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:41.628578  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:41.628650  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:41.665011  148123 cri.go:89] found id: ""
	I1010 19:27:41.665041  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.665053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:41.665060  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:41.665112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:41.702650  148123 cri.go:89] found id: ""
	I1010 19:27:41.702681  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.702692  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:41.702700  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:41.702764  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:41.738971  148123 cri.go:89] found id: ""
	I1010 19:27:41.739002  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.739021  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:41.739031  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:41.739062  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:41.813296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:41.813321  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:41.813335  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:41.895138  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:41.895185  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.941975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:41.942012  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:41.994838  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:41.994888  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:39.727140  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:41.728263  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.566241  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:43.065219  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:44.511153  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:44.524871  148123 kubeadm.go:597] duration metric: took 4m4.194337013s to restartPrimaryControlPlane
	W1010 19:27:44.524953  148123 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:44.524988  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:44.994063  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:27:45.010926  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:27:45.021757  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:27:45.032246  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:27:45.032270  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:27:45.032326  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:27:45.042799  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:27:45.042866  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:27:45.053017  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:27:45.063373  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:27:45.063445  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:27:45.074689  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.085187  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:27:45.085241  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.096275  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:27:45.106686  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:27:45.106750  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:27:45.117018  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:27:45.358920  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:27:44.226853  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:46.227586  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:48.727469  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:45.066410  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:47.569864  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:51.230704  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:53.727351  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:50.065845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:52.066267  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:55.727457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:58.226861  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:54.564611  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:56.566702  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:00.728542  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.225779  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:59.065614  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:01.068088  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.566502  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.739904  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.195045639s)
	I1010 19:28:06.739984  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:06.756046  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:06.768580  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:06.780663  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:06.780732  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:06.780807  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:28:06.792092  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:06.792179  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:06.804515  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:28:06.814969  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:06.815040  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:06.826056  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.836050  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:06.836108  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.846125  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:28:06.855505  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:06.855559  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:06.865367  147758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:06.916227  147758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:06.916375  147758 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:07.036539  147758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:07.036652  147758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:07.036762  147758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:07.044897  147758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:07.046978  147758 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:07.047117  147758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:07.047229  147758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:07.047384  147758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:07.047467  147758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:07.047584  147758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:07.047675  147758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:07.047794  147758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:07.047902  147758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:07.048005  147758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:07.048093  147758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:07.048142  147758 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:07.048210  147758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:07.127836  147758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:07.434492  147758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:07.487567  147758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:07.731314  147758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:07.919060  147758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:07.919565  147758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:07.922740  147758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:05.227611  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.229836  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.065246  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:08.067360  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.925140  147758 out.go:235]   - Booting up control plane ...
	I1010 19:28:07.925239  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:07.925356  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:07.925444  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:07.944375  147758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:07.951182  147758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:07.951274  147758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:08.087325  147758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:08.087560  147758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:08.598361  147758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.081439ms
	I1010 19:28:08.598502  147758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:09.727932  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:12.227939  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:10.566945  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:13.067142  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.100517  147758 kubeadm.go:310] [api-check] The API server is healthy after 5.501985157s
	I1010 19:28:14.119932  147758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:14.149557  147758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:14.207413  147758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:14.207735  147758 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-541370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:14.226199  147758 kubeadm.go:310] [bootstrap-token] Using token: sbg4v0.t5me93bb5vn8m913
	I1010 19:28:14.228059  147758 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:14.228208  147758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:14.241706  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:14.256554  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:14.263129  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:14.274346  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:14.282313  147758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:14.507850  147758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:14.970234  147758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:15.508328  147758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:15.509530  147758 kubeadm.go:310] 
	I1010 19:28:15.509635  147758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:15.509653  147758 kubeadm.go:310] 
	I1010 19:28:15.509743  147758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:15.509762  147758 kubeadm.go:310] 
	I1010 19:28:15.509795  147758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:15.509888  147758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:15.509954  147758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:15.509970  147758 kubeadm.go:310] 
	I1010 19:28:15.510083  147758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:15.510103  147758 kubeadm.go:310] 
	I1010 19:28:15.510203  147758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:15.510214  147758 kubeadm.go:310] 
	I1010 19:28:15.510297  147758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:15.510410  147758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:15.510489  147758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:15.510495  147758 kubeadm.go:310] 
	I1010 19:28:15.510603  147758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:15.510707  147758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:15.510724  147758 kubeadm.go:310] 
	I1010 19:28:15.510807  147758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.510958  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:15.511005  147758 kubeadm.go:310] 	--control-plane 
	I1010 19:28:15.511034  147758 kubeadm.go:310] 
	I1010 19:28:15.511161  147758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:15.511173  147758 kubeadm.go:310] 
	I1010 19:28:15.511268  147758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.511403  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:15.512298  147758 kubeadm.go:310] W1010 19:28:06.890572    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512594  147758 kubeadm.go:310] W1010 19:28:06.891448    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512702  147758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:15.512734  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:28:15.512744  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:15.514703  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:15.516229  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:15.527554  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:15.549266  147758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:15.549362  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:15.549399  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-541370 minikube.k8s.io/updated_at=2024_10_10T19_28_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=embed-certs-541370 minikube.k8s.io/primary=true
	I1010 19:28:15.590732  147758 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:15.740942  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.241392  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.741807  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:14.229241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:16.727260  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.059512  148525 pod_ready.go:82] duration metric: took 4m0.00022742s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:14.059550  148525 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:28:14.059569  148525 pod_ready.go:39] duration metric: took 4m7.001942194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:14.059614  148525 kubeadm.go:597] duration metric: took 4m14.998320151s to restartPrimaryControlPlane
	W1010 19:28:14.059672  148525 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:28:14.059698  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:28:17.241315  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:17.741580  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.241006  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.742042  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.241251  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.741030  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.862541  147758 kubeadm.go:1113] duration metric: took 4.313246481s to wait for elevateKubeSystemPrivileges
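The burst of `kubectl get sa default` calls above is a readiness poll: the command is retried until the controller manager has created the `default` service account, which is what the ~4.3s elevateKubeSystemPrivileges duration measures. A minimal equivalent of that loop, assuming the binary and kubeconfig paths shown in the log:

    # poll until the default service account exists
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done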
	I1010 19:28:19.862579  147758 kubeadm.go:394] duration metric: took 5m2.288571479s to StartCluster
	I1010 19:28:19.862628  147758 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.862751  147758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:19.864528  147758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.864812  147758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:19.864910  147758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:19.865019  147758 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-541370"
	I1010 19:28:19.865041  147758 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-541370"
	W1010 19:28:19.865053  147758 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:19.865062  147758 addons.go:69] Setting default-storageclass=true in profile "embed-certs-541370"
	I1010 19:28:19.865085  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865077  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:19.865129  147758 addons.go:69] Setting metrics-server=true in profile "embed-certs-541370"
	I1010 19:28:19.865164  147758 addons.go:234] Setting addon metrics-server=true in "embed-certs-541370"
	W1010 19:28:19.865179  147758 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:19.865115  147758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-541370"
	I1010 19:28:19.865215  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865558  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865593  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865607  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865629  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865595  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865725  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.866857  147758 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:19.868590  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:19.882524  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I1010 19:28:19.882595  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I1010 19:28:19.882678  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I1010 19:28:19.883065  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883168  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883281  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883559  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883575  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883657  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883669  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883802  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883818  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883968  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.883976  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884141  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884194  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.884408  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884437  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.884684  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884746  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.887912  147758 addons.go:234] Setting addon default-storageclass=true in "embed-certs-541370"
	W1010 19:28:19.887942  147758 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:19.887973  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.888333  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.888383  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.901588  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1010 19:28:19.902131  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.902597  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.902621  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.902927  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.903101  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.904556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.905207  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1010 19:28:19.905621  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.906188  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.906209  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.906599  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.906647  147758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:19.906837  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.907699  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1010 19:28:19.908147  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.908557  147758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:19.908584  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:19.908610  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.908705  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.908717  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.908745  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.909364  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.910154  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.910208  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.910840  147758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:19.912716  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.912722  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:19.912743  147758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:19.912769  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.913199  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.913224  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.913500  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.913682  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.913845  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.913972  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.921800  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922343  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.922374  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922653  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.922842  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.922965  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.923108  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.935097  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I1010 19:28:19.935605  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.936123  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.936146  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.936561  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.936747  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.938789  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.939019  147758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:19.939034  147758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:19.939054  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.941682  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942137  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.942165  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942404  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.942642  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.942767  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.942915  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:20.108247  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:20.149819  147758 node_ready.go:35] waiting up to 6m0s for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163096  147758 node_ready.go:49] node "embed-certs-541370" has status "Ready":"True"
	I1010 19:28:20.163118  147758 node_ready.go:38] duration metric: took 13.26779ms for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163128  147758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:20.168620  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:20.241952  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:20.241978  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:20.249679  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:20.290149  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:20.290190  147758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:20.291475  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:20.410539  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.410582  147758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:20.491567  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.684370  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684403  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.684695  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.684742  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.684749  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.684756  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684764  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.685029  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.685059  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.685036  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.695901  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.695926  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.696202  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.696249  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439463  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147952803s)
	I1010 19:28:21.439626  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.439659  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.439951  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.439969  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.439976  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439997  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.440009  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.440299  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.440298  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.440314  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.780486  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.288854773s)
	I1010 19:28:21.780551  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.780567  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.780948  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.780980  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.780996  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781007  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.781016  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.781289  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.781310  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781331  147758 addons.go:475] Verifying addon metrics-server=true in "embed-certs-541370"
	I1010 19:28:21.783512  147758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:21.784958  147758 addons.go:510] duration metric: took 1.92006141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
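At this point the three addons were enabled by applying the generated manifests under /etc/kubernetes/addons/ over SSH. Outside the test harness the same result is normally reached through the minikube CLI; a sketch, assuming the profile name from the log:

    minikube -p embed-certs-541370 addons enable storage-provisioner
    minikube -p embed-certs-541370 addons enable metrics-server
    minikube -p embed-certs-541370 addons list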
	I1010 19:28:19.225844  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:21.227960  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:23.726439  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:22.195129  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:24.678736  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:25.727053  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.727657  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.177348  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:29.177459  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.177485  147758 pod_ready.go:82] duration metric: took 9.008841503s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.177495  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182744  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.182777  147758 pod_ready.go:82] duration metric: took 5.273263ms for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182791  147758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191507  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.191539  147758 pod_ready.go:82] duration metric: took 8.738961ms for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191554  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199167  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.199218  147758 pod_ready.go:82] duration metric: took 7.635672ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199234  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204558  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.204581  147758 pod_ready.go:82] duration metric: took 5.337574ms for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204591  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573781  147758 pod_ready.go:93] pod "kube-proxy-6hdds" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.573808  147758 pod_ready.go:82] duration metric: took 369.210969ms for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573818  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974015  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.974039  147758 pod_ready.go:82] duration metric: took 400.214845ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974048  147758 pod_ready.go:39] duration metric: took 9.810911064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:29.974066  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:29.974120  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:29.991332  147758 api_server.go:72] duration metric: took 10.126480862s to wait for apiserver process to appear ...
	I1010 19:28:29.991356  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:29.991382  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:28:29.995855  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:28:29.997488  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:28:29.997516  147758 api_server.go:131] duration metric: took 6.152312ms to wait for apiserver health ...
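The health probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, which returned 200 with body "ok". It can be reproduced by hand using the IP and port reported in the log (-k skips certificate verification, since the apiserver certificate is signed by the cluster's own CA):

    curl -k https://192.168.39.120:8443/healthz
    # expected output: ok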
	I1010 19:28:29.997526  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:28:30.176631  147758 system_pods.go:59] 9 kube-system pods found
	I1010 19:28:30.176662  147758 system_pods.go:61] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.176668  147758 system_pods.go:61] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.176672  147758 system_pods.go:61] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.176676  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.176680  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.176683  147758 system_pods.go:61] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.176686  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.176693  147758 system_pods.go:61] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.176699  147758 system_pods.go:61] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.176707  147758 system_pods.go:74] duration metric: took 179.174083ms to wait for pod list to return data ...
	I1010 19:28:30.176714  147758 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:28:30.375326  147758 default_sa.go:45] found service account: "default"
	I1010 19:28:30.375361  147758 default_sa.go:55] duration metric: took 198.640267ms for default service account to be created ...
	I1010 19:28:30.375374  147758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:28:30.578749  147758 system_pods.go:86] 9 kube-system pods found
	I1010 19:28:30.578780  147758 system_pods.go:89] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.578786  147758 system_pods.go:89] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.578790  147758 system_pods.go:89] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.578794  147758 system_pods.go:89] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.578797  147758 system_pods.go:89] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.578801  147758 system_pods.go:89] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.578804  147758 system_pods.go:89] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.578810  147758 system_pods.go:89] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.578814  147758 system_pods.go:89] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.578822  147758 system_pods.go:126] duration metric: took 203.441477ms to wait for k8s-apps to be running ...
	I1010 19:28:30.578829  147758 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:28:30.578877  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:30.596523  147758 system_svc.go:56] duration metric: took 17.684729ms WaitForService to wait for kubelet
	I1010 19:28:30.596553  147758 kubeadm.go:582] duration metric: took 10.731708748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:28:30.596573  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:28:30.774749  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:28:30.774783  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:28:30.774807  147758 node_conditions.go:105] duration metric: took 178.228671ms to run NodePressure ...
	I1010 19:28:30.774822  147758 start.go:241] waiting for startup goroutines ...
	I1010 19:28:30.774831  147758 start.go:246] waiting for cluster config update ...
	I1010 19:28:30.774845  147758 start.go:255] writing updated cluster config ...
	I1010 19:28:30.775121  147758 ssh_runner.go:195] Run: rm -f paused
	I1010 19:28:30.826689  147758 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:28:30.828795  147758 out.go:177] * Done! kubectl is now configured to use "embed-certs-541370" cluster and "default" namespace by default
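With the profile finished, the kubeconfig at /home/jenkins/minikube-integration/19787-81676/kubeconfig now carries the new cluster as its current context. A couple of follow-up commands to confirm the cluster is reachable, assuming the context name matches the profile name as the log states:

    kubectl --context embed-certs-541370 get nodes
    kubectl --context embed-certs-541370 -n kube-system get pods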
	I1010 19:28:29.728096  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:32.229632  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:34.726536  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:36.727032  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:38.727488  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:40.372903  148525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.31317648s)
	I1010 19:28:40.372991  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:40.389319  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:40.400123  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:40.411906  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:40.411932  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:40.411976  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:28:40.421840  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:40.421904  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:40.432229  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:28:40.442121  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:40.442203  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:40.452969  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.463085  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:40.463146  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.473103  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:28:40.482854  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:40.482914  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
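The block above is the stale-kubeconfig check: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it greps for the expected server URL (https://control-plane.minikube.internal:8444) and removes the file when the URL is absent or the file is missing, so the following kubeadm init regenerates them. A condensed equivalent of that loop, assuming the same URL:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8444" \
          "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done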
	I1010 19:28:40.494023  148525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:40.543369  148525 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:40.543466  148525 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:40.657301  148525 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:40.657462  148525 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:40.657579  148525 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:40.669222  148525 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:40.670995  148525 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:40.671102  148525 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:40.671171  148525 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:40.671284  148525 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:40.671374  148525 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:40.671471  148525 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:40.671557  148525 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:40.671650  148525 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:40.671751  148525 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:40.671895  148525 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:40.672000  148525 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:40.672056  148525 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:40.672136  148525 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:40.876613  148525 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:41.109518  148525 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:41.186751  148525 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:41.424710  148525 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:41.479611  148525 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:41.480235  148525 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:41.483222  148525 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:41.227521  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:43.728023  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:41.484809  148525 out.go:235]   - Booting up control plane ...
	I1010 19:28:41.484935  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:41.485020  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:41.485317  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:41.506919  148525 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:41.517006  148525 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:41.517077  148525 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:41.653211  148525 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:41.653364  148525 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:42.655360  148525 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001910447s
	I1010 19:28:42.655482  148525 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:47.658431  148525 kubeadm.go:310] [api-check] The API server is healthy after 5.003169217s
	I1010 19:28:47.676178  148525 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:47.694752  148525 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:47.720376  148525 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:47.720645  148525 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-361847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:47.736489  148525 kubeadm.go:310] [bootstrap-token] Using token: cprf0t.lm4xp75yi0cdu4sy
	I1010 19:28:46.228217  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:48.726740  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:47.737958  148525 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:47.738089  148525 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:47.750073  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:47.758010  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:47.761649  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:47.768953  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:47.774428  148525 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:48.065988  148525 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:48.502538  148525 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:49.066479  148525 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:49.069842  148525 kubeadm.go:310] 
	I1010 19:28:49.069937  148525 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:49.069947  148525 kubeadm.go:310] 
	I1010 19:28:49.070046  148525 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:49.070058  148525 kubeadm.go:310] 
	I1010 19:28:49.070089  148525 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:49.070166  148525 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:49.070254  148525 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:49.070265  148525 kubeadm.go:310] 
	I1010 19:28:49.070342  148525 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:49.070353  148525 kubeadm.go:310] 
	I1010 19:28:49.070446  148525 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:49.070478  148525 kubeadm.go:310] 
	I1010 19:28:49.070544  148525 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:49.070640  148525 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:49.070750  148525 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:49.070773  148525 kubeadm.go:310] 
	I1010 19:28:49.070880  148525 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:49.070990  148525 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:49.071001  148525 kubeadm.go:310] 
	I1010 19:28:49.071153  148525 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.071299  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:49.071330  148525 kubeadm.go:310] 	--control-plane 
	I1010 19:28:49.071349  148525 kubeadm.go:310] 
	I1010 19:28:49.071468  148525 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:49.071497  148525 kubeadm.go:310] 
	I1010 19:28:49.072228  148525 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.072354  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:49.074595  148525 kubeadm.go:310] W1010 19:28:40.525557    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.074944  148525 kubeadm.go:310] W1010 19:28:40.526329    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.075102  148525 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
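The join commands printed above embed a bootstrap token, which kubeadm issues with a limited TTL (24 hours by default). If a node is joined later, a fresh token and a complete join command can be regenerated on the control-plane node; a sketch:

    sudo kubeadm token create --print-join-command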
	I1010 19:28:49.075143  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:28:49.075166  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:49.077190  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:49.078665  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:49.091792  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:49.113801  148525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:49.113920  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-361847 minikube.k8s.io/updated_at=2024_10_10T19_28_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=default-k8s-diff-port-361847 minikube.k8s.io/primary=true
	I1010 19:28:49.114074  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.154398  148525 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:49.351271  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.852049  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.351441  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.852022  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.351391  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.851329  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.351840  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.852392  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.351397  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.443325  148525 kubeadm.go:1113] duration metric: took 4.329288133s to wait for elevateKubeSystemPrivileges
	I1010 19:28:53.443363  148525 kubeadm.go:394] duration metric: took 4m54.439732071s to StartCluster
	I1010 19:28:53.443386  148525 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.443481  148525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:53.445465  148525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.445747  148525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:53.445842  148525 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:53.445957  148525 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.445980  148525 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.445992  148525 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:53.446004  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:53.446026  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446065  148525 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446100  148525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-361847"
	I1010 19:28:53.446085  148525 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446137  148525 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.446151  148525 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:53.446242  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446515  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.446562  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447089  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447135  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447315  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447360  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.450779  148525 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:53.452838  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:53.465502  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I1010 19:28:53.466020  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.466572  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.466594  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.466772  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1010 19:28:53.467034  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.467209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.467310  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.467828  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.467857  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.467899  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1010 19:28:53.468270  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.468451  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.468866  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.468891  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.469102  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.469150  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.469484  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.470068  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.470114  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.471192  148525 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.471213  148525 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:53.471261  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.471618  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.471664  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.486550  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1010 19:28:53.487068  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.487608  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.487626  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.488015  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.488329  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.490200  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I1010 19:28:53.490240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.490790  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.491318  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.491341  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.491682  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.491957  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1010 19:28:53.492100  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.492423  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.492731  148525 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:53.492811  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.492831  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.493240  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.493885  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.493979  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.494031  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.494359  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:53.494381  148525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:53.494397  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.495771  148525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:51.226596  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227299  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227335  147213 pod_ready.go:82] duration metric: took 4m0.007224391s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:53.227346  147213 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1010 19:28:53.227355  147213 pod_ready.go:39] duration metric: took 4m5.554224355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.227375  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:53.227419  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:53.227484  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:53.288713  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.288749  147213 cri.go:89] found id: ""
	I1010 19:28:53.288759  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:53.288823  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.294819  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:53.294904  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:53.340169  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:53.340197  147213 cri.go:89] found id: ""
	I1010 19:28:53.340207  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:53.340271  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.345214  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:53.345292  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:53.392808  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.392838  147213 cri.go:89] found id: ""
	I1010 19:28:53.392859  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:53.392921  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.398275  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:53.398361  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:53.439567  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.439594  147213 cri.go:89] found id: ""
	I1010 19:28:53.439604  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:53.439665  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.444366  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:53.444436  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:53.522580  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:53.522597  147213 cri.go:89] found id: ""
	I1010 19:28:53.522605  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:53.522654  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.528890  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:53.528974  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:53.575933  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:53.575963  147213 cri.go:89] found id: ""
	I1010 19:28:53.575975  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:53.576035  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.581693  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:53.581763  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:53.619789  147213 cri.go:89] found id: ""
	I1010 19:28:53.619819  147213 logs.go:282] 0 containers: []
	W1010 19:28:53.619831  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:53.619839  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:53.619899  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:53.659715  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:53.659746  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:53.659752  147213 cri.go:89] found id: ""
	I1010 19:28:53.659762  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:53.659828  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.664377  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.668766  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:53.668796  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:53.685976  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:53.686007  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:53.497232  148525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:53.497251  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:53.497273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.497732  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498599  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.498627  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498971  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.499159  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.499312  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.499414  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.501044  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.501531  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501782  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.501956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.502080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.502232  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.512240  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1010 19:28:53.512809  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.513347  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.513368  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.513787  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.514001  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.515436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.515639  148525 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.515659  148525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:53.515681  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.518128  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518596  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.518628  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518909  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.519080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.519216  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.519376  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.712871  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:53.755059  148525 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766564  148525 node_ready.go:49] node "default-k8s-diff-port-361847" has status "Ready":"True"
	I1010 19:28:53.766590  148525 node_ready.go:38] duration metric: took 11.490223ms for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766603  148525 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.777458  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:53.875493  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:53.875525  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:53.911443  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.944885  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:53.944919  148525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:53.945487  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:54.011209  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.011239  148525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:54.039679  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.598172  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598226  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598584  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598608  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.598619  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598898  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:54.598931  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598939  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.643365  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.643392  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.643734  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.643760  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287018  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.341483807s)
	I1010 19:28:55.287045  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.247326452s)
	I1010 19:28:55.287089  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287094  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287112  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287440  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287479  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287506  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287524  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287570  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287589  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287598  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287607  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287818  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287831  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.287855  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287862  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287872  148525 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-361847"
	I1010 19:28:55.287880  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.289944  148525 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:53.841387  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:53.841441  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.892951  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:53.893005  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.947636  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:53.947668  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.992969  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:53.992998  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:54.520652  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:54.520703  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:28:54.588366  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:54.588418  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:54.651179  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:54.651227  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:54.712881  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:54.712925  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:54.779030  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:54.779094  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:54.821961  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:54.822002  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:54.871409  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:54.871446  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:57.425310  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:57.442308  147213 api_server.go:72] duration metric: took 4m17.02881034s to wait for apiserver process to appear ...
	I1010 19:28:57.442343  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:57.442383  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:57.442444  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:57.481392  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.481420  147213 cri.go:89] found id: ""
	I1010 19:28:57.481430  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:57.481503  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.486191  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:57.486269  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:57.532238  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.532271  147213 cri.go:89] found id: ""
	I1010 19:28:57.532284  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:57.532357  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.538105  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:57.538188  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:57.579729  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:57.579757  147213 cri.go:89] found id: ""
	I1010 19:28:57.579767  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:57.579833  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.584494  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:57.584568  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:57.623920  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:57.623949  147213 cri.go:89] found id: ""
	I1010 19:28:57.623960  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:57.624028  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.628927  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:57.629018  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:57.669669  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.669698  147213 cri.go:89] found id: ""
	I1010 19:28:57.669707  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:57.669771  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.674449  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:57.674526  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:57.721856  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:57.721881  147213 cri.go:89] found id: ""
	I1010 19:28:57.721891  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:57.721955  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.726422  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:57.726497  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:57.764464  147213 cri.go:89] found id: ""
	I1010 19:28:57.764499  147213 logs.go:282] 0 containers: []
	W1010 19:28:57.764512  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:57.764521  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:57.764595  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:57.809758  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:57.809784  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:57.809788  147213 cri.go:89] found id: ""
	I1010 19:28:57.809797  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:57.809854  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.815576  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.820152  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:57.820181  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.869339  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:57.869383  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.918698  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:57.918739  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.960939  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:57.960985  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:58.013572  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:58.013612  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:58.053247  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:58.053277  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:58.507428  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:58.507473  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:58.552704  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:58.552742  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:58.672077  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:58.672127  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:58.690997  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:58.691049  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:58.735251  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:58.735287  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:55.291700  148525 addons.go:510] duration metric: took 1.845864985s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:55.785186  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:57.789567  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:00.284444  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:01.297627  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.297660  148525 pod_ready.go:82] duration metric: took 7.520173084s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.297676  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804654  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.804676  148525 pod_ready.go:82] duration metric: took 506.992872ms for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804690  148525 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809788  148525 pod_ready.go:93] pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.809814  148525 pod_ready.go:82] duration metric: took 5.116023ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809825  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814460  148525 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.814486  148525 pod_ready.go:82] duration metric: took 4.652085ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814501  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819719  148525 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.819741  148525 pod_ready.go:82] duration metric: took 5.231258ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819753  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082285  148525 pod_ready.go:93] pod "kube-proxy-jlvn6" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.082325  148525 pod_ready.go:82] duration metric: took 262.562954ms for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082342  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481705  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.481730  148525 pod_ready.go:82] duration metric: took 399.378957ms for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481742  148525 pod_ready.go:39] duration metric: took 8.715126416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:29:02.481779  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:29:02.481832  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:29:02.498706  148525 api_server.go:72] duration metric: took 9.052891898s to wait for apiserver process to appear ...
	I1010 19:29:02.498760  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:29:02.498795  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:29:02.503501  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:29:02.504594  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:02.504620  148525 api_server.go:131] duration metric: took 5.850548ms to wait for apiserver health ...
	I1010 19:29:02.504629  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:02.685579  148525 system_pods.go:59] 9 kube-system pods found
	I1010 19:29:02.685611  148525 system_pods.go:61] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:02.685618  148525 system_pods.go:61] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:02.685624  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:02.685630  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:02.685635  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:02.685639  148525 system_pods.go:61] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:02.685644  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:02.685653  148525 system_pods.go:61] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:02.685658  148525 system_pods.go:61] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:02.685669  148525 system_pods.go:74] duration metric: took 181.032548ms to wait for pod list to return data ...
	I1010 19:29:02.685683  148525 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:02.883256  148525 default_sa.go:45] found service account: "default"
	I1010 19:29:02.883288  148525 default_sa.go:55] duration metric: took 197.59742ms for default service account to be created ...
	I1010 19:29:02.883298  148525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:03.084706  148525 system_pods.go:86] 9 kube-system pods found
	I1010 19:29:03.084737  148525 system_pods.go:89] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:03.084742  148525 system_pods.go:89] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:03.084746  148525 system_pods.go:89] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:03.084751  148525 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:03.084755  148525 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:03.084759  148525 system_pods.go:89] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:03.084762  148525 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:03.084768  148525 system_pods.go:89] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:03.084772  148525 system_pods.go:89] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:03.084779  148525 system_pods.go:126] duration metric: took 201.476637ms to wait for k8s-apps to be running ...
	I1010 19:29:03.084787  148525 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:03.084832  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:03.100986  148525 system_svc.go:56] duration metric: took 16.183062ms WaitForService to wait for kubelet
	I1010 19:29:03.101026  148525 kubeadm.go:582] duration metric: took 9.655245557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:03.101050  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:03.282063  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:03.282095  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:03.282106  148525 node_conditions.go:105] duration metric: took 181.049888ms to run NodePressure ...
	I1010 19:29:03.282119  148525 start.go:241] waiting for startup goroutines ...
	I1010 19:29:03.282125  148525 start.go:246] waiting for cluster config update ...
	I1010 19:29:03.282135  148525 start.go:255] writing updated cluster config ...
	I1010 19:29:03.282414  148525 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:03.331838  148525 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:03.333698  148525 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-361847" cluster and "default" namespace by default
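The bring-up above ends with the healthz wait (api_server.go:253/279): the harness polls the apiserver's /healthz endpoint until it answers HTTP 200 ("ok") or a deadline passes. A minimal, self-contained Go sketch of that polling pattern follows; it is not minikube's actual implementation, and the URL, the 2-minute deadline, and the insecure TLS transport are illustrative placeholders (minikube derives the address, such as https://192.168.50.32:8444/healthz, from the cluster config).

    // healthz_wait.go: sketch of "waiting for apiserver healthz status".
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		// Placeholder: skip certificate verification for the sketch only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz reported "ok"
    			}
    		}
    		time.Sleep(2 * time.Second) // back off before the next probe
    	}
    	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.50.32:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }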
	I1010 19:28:58.775358  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:58.775396  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:58.812210  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:58.812269  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:01.381750  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:29:01.386658  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:29:01.387793  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:01.387819  147213 api_server.go:131] duration metric: took 3.945468552s to wait for apiserver health ...
	I1010 19:29:01.387829  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:01.387861  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:29:01.387948  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:29:01.433312  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:01.433344  147213 cri.go:89] found id: ""
	I1010 19:29:01.433433  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:29:01.433521  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.437920  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:29:01.437983  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:29:01.476429  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.476458  147213 cri.go:89] found id: ""
	I1010 19:29:01.476470  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:29:01.476522  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.480912  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:29:01.480987  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:29:01.522141  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.522164  147213 cri.go:89] found id: ""
	I1010 19:29:01.522173  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:29:01.522238  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.526742  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:29:01.526803  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:29:01.572715  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:01.572747  147213 cri.go:89] found id: ""
	I1010 19:29:01.572759  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:29:01.572814  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.577754  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:29:01.577832  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:29:01.616077  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.616104  147213 cri.go:89] found id: ""
	I1010 19:29:01.616121  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:29:01.616185  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.620622  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:29:01.620702  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:29:01.662859  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:01.662889  147213 cri.go:89] found id: ""
	I1010 19:29:01.662903  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:29:01.662964  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.667491  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:29:01.667585  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:29:01.706191  147213 cri.go:89] found id: ""
	I1010 19:29:01.706217  147213 logs.go:282] 0 containers: []
	W1010 19:29:01.706228  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:29:01.706234  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:29:01.706299  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:29:01.753559  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:01.753581  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:01.753584  147213 cri.go:89] found id: ""
	I1010 19:29:01.753591  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:29:01.753645  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.758179  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.762336  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:29:01.762358  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:29:01.867667  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:29:01.867698  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.911722  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:29:01.911756  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.955152  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:29:01.955189  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.995010  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:29:01.995041  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:02.047505  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:29:02.047546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:02.085080  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:29:02.085110  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:02.128482  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:29:02.128527  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:02.194867  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:29:02.194904  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:29:02.211881  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:29:02.211911  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:02.262969  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:29:02.263013  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:02.302921  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:29:02.302956  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:29:02.671102  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:29:02.671169  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:29:05.241477  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:29:05.241508  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.241513  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.241517  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.241521  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.241525  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.241528  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.241534  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.241540  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.241549  147213 system_pods.go:74] duration metric: took 3.853712488s to wait for pod list to return data ...
	I1010 19:29:05.241556  147213 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:05.244686  147213 default_sa.go:45] found service account: "default"
	I1010 19:29:05.244721  147213 default_sa.go:55] duration metric: took 3.158069ms for default service account to be created ...
	I1010 19:29:05.244733  147213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:05.249372  147213 system_pods.go:86] 8 kube-system pods found
	I1010 19:29:05.249398  147213 system_pods.go:89] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.249404  147213 system_pods.go:89] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.249408  147213 system_pods.go:89] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.249413  147213 system_pods.go:89] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.249418  147213 system_pods.go:89] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.249425  147213 system_pods.go:89] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.249433  147213 system_pods.go:89] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.249442  147213 system_pods.go:89] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.249455  147213 system_pods.go:126] duration metric: took 4.715381ms to wait for k8s-apps to be running ...
	I1010 19:29:05.249467  147213 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:05.249519  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:05.265180  147213 system_svc.go:56] duration metric: took 15.703413ms WaitForService to wait for kubelet
	I1010 19:29:05.265216  147213 kubeadm.go:582] duration metric: took 4m24.851723603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:05.265237  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:05.268775  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:05.268807  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:05.268821  147213 node_conditions.go:105] duration metric: took 3.575195ms to run NodePressure ...
	I1010 19:29:05.268834  147213 start.go:241] waiting for startup goroutines ...
	I1010 19:29:05.268840  147213 start.go:246] waiting for cluster config update ...
	I1010 19:29:05.268869  147213 start.go:255] writing updated cluster config ...
	I1010 19:29:05.269148  147213 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:05.319999  147213 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:05.322189  147213 out.go:177] * Done! kubectl is now configured to use "no-preload-320324" cluster and "default" namespace by default
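Throughout the run above, diagnostics are gathered with a two-step CRI pattern: list container IDs with "sudo crictl ps -a --quiet --name=<component>", then dump each container's last 400 log lines with "crictl logs --tail 400 <id>". The Go sketch below mirrors that pattern only; it assumes crictl and sudo are available locally, whereas the test runs the same commands on the VM over SSH (ssh_runner), which the sketch omits.

    // crictl_logs.go: sketch of the container-log gathering seen in logs.go:123.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors: sudo crictl ps -a --quiet --name=<name>
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	if err != nil {
    		fmt.Println("listing containers failed:", err)
    		return
    	}
    	for _, id := range ids {
    		// Mirrors: sudo crictl logs --tail 400 <container-id>
    		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			fmt.Println("fetching logs failed for", id, ":", err)
    			continue
    		}
    		fmt.Printf("=== %s ===\n%s\n", id, logs)
    	}
    }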
	I1010 19:29:41.275119  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:29:41.275272  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:29:41.276822  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:29:41.276919  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:29:41.277017  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:29:41.277142  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:29:41.277254  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:29:41.277357  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:29:41.279069  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:29:41.279160  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:29:41.279217  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:29:41.279306  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:29:41.279381  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:29:41.279484  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:29:41.279576  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:29:41.279674  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:29:41.279779  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:29:41.279906  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:29:41.279971  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:29:41.280005  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:29:41.280052  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:29:41.280095  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:29:41.280144  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:29:41.280219  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:29:41.280317  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:29:41.280474  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:29:41.280583  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:29:41.280648  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:29:41.280736  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:29:41.282180  148123 out.go:235]   - Booting up control plane ...
	I1010 19:29:41.282266  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:29:41.282336  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:29:41.282414  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:29:41.282538  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:29:41.282748  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:29:41.282822  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:29:41.282896  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283061  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283123  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283287  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283348  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283519  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283623  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283809  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283900  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.284115  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.284135  148123 kubeadm.go:310] 
	I1010 19:29:41.284201  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:29:41.284243  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:29:41.284257  148123 kubeadm.go:310] 
	I1010 19:29:41.284307  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:29:41.284346  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:29:41.284481  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:29:41.284505  148123 kubeadm.go:310] 
	I1010 19:29:41.284655  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:29:41.284714  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:29:41.284752  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:29:41.284758  148123 kubeadm.go:310] 
	I1010 19:29:41.284913  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:29:41.285038  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:29:41.285055  148123 kubeadm.go:310] 
	I1010 19:29:41.285169  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:29:41.285307  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:29:41.285475  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:29:41.285587  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:29:41.285644  148123 kubeadm.go:310] 
	W1010 19:29:41.285747  148123 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1010 19:29:41.285792  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:29:46.681684  148123 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.395862838s)
	I1010 19:29:46.681797  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:46.696927  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:29:46.708250  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:29:46.708280  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:29:46.708339  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:29:46.719748  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:29:46.719828  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:29:46.730892  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:29:46.742329  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:29:46.742401  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:29:46.752919  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.762860  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:29:46.762932  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.772513  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:29:46.781672  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:29:46.781746  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:29:46.791250  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:29:47.018706  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:31:43.351851  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:31:43.351992  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:31:43.353664  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:31:43.353752  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:31:43.353861  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:31:43.353998  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:31:43.354130  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:31:43.354235  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:31:43.356450  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:31:43.356557  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:31:43.356652  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:31:43.356783  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:31:43.356902  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:31:43.357002  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:31:43.357074  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:31:43.357181  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:31:43.357247  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:31:43.357325  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:31:43.357408  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:31:43.357459  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:31:43.357529  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:31:43.357604  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:31:43.357676  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:31:43.357751  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:31:43.357830  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:31:43.357979  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:31:43.358092  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:31:43.358158  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:31:43.358263  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:31:43.360216  148123 out.go:235]   - Booting up control plane ...
	I1010 19:31:43.360332  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:31:43.360463  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:31:43.360551  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:31:43.360673  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:31:43.360907  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:31:43.360976  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:31:43.361058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361309  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361410  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361648  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361742  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361960  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362308  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362399  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362640  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362652  148123 kubeadm.go:310] 
	I1010 19:31:43.362708  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:31:43.362758  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:31:43.362768  148123 kubeadm.go:310] 
	I1010 19:31:43.362809  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:31:43.362858  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:31:43.362981  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:31:43.362993  148123 kubeadm.go:310] 
	I1010 19:31:43.363119  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:31:43.363164  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:31:43.363204  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:31:43.363219  148123 kubeadm.go:310] 
	I1010 19:31:43.363344  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:31:43.363454  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:31:43.363465  148123 kubeadm.go:310] 
	I1010 19:31:43.363591  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:31:43.363708  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:31:43.363803  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:31:43.363892  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:31:43.363978  148123 kubeadm.go:394] duration metric: took 8m3.085634474s to StartCluster
	I1010 19:31:43.364052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:31:43.364159  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:31:43.364268  148123 kubeadm.go:310] 
	I1010 19:31:43.413041  148123 cri.go:89] found id: ""
	I1010 19:31:43.413075  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.413086  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:31:43.413092  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:31:43.413206  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:31:43.456454  148123 cri.go:89] found id: ""
	I1010 19:31:43.456487  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.456499  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:31:43.456507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:31:43.456567  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:31:43.498582  148123 cri.go:89] found id: ""
	I1010 19:31:43.498614  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.498624  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:31:43.498633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:31:43.498694  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:31:43.540557  148123 cri.go:89] found id: ""
	I1010 19:31:43.540589  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.540598  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:31:43.540605  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:31:43.540661  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:31:43.576743  148123 cri.go:89] found id: ""
	I1010 19:31:43.576776  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.576787  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:31:43.576796  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:31:43.576887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:31:43.614620  148123 cri.go:89] found id: ""
	I1010 19:31:43.614652  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.614662  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:31:43.614668  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:31:43.614730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:31:43.652008  148123 cri.go:89] found id: ""
	I1010 19:31:43.652061  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.652075  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:31:43.652084  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:31:43.652153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:31:43.689990  148123 cri.go:89] found id: ""
	I1010 19:31:43.690019  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.690028  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:31:43.690043  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:31:43.690056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:31:43.745634  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:31:43.745673  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:31:43.760064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:31:43.760095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:31:43.840168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:31:43.840197  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:31:43.840214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:31:43.943265  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:31:43.943303  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1010 19:31:43.987758  148123 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:31:43.987816  148123 out.go:270] * 
	W1010 19:31:43.987891  148123 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.987906  148123 out.go:270] * 
	W1010 19:31:43.988703  148123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:31:43.991928  148123 out.go:201] 
	W1010 19:31:43.993494  148123 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.993556  148123 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:31:43.993585  148123 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1010 19:31:43.995273  148123 out.go:201] 
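	The pattern in both init attempts above is the same: the kubelet never answers its health check on 127.0.0.1:10248, kubeadm's wait-control-plane phase times out, and minikube exits with K8S_KUBELET_NOT_RUNNING. A triage sketch for a node in this state, using only commands the log itself suggests (kubeadm's kubelet/crictl hints and minikube's cgroup-driver suggestion); CONTAINERID is a placeholder and the test's usual profile/driver flags are omitted, so this is not a verified fix for this run:
	
		# on the node: is the kubelet running, and what is it logging?
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
	
		# did any control-plane container start and then crash under CRI-O?
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	
		# from the host: retry with the cgroup driver minikube suggests, then capture logs
		minikube start --extra-config=kubelet.cgroup-driver=systemd
		minikube logs --file=logs.txt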
	
	
	==> CRI-O <==
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.722065512Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589087722034937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92989591-83cf-481c-95a3-32f3b66d8b45 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.722892529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17808ca7-2e16-44f8-bb4e-11751a3bf858 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.722998100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17808ca7-2e16-44f8-bb4e-11751a3bf858 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.723520882Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588308885762612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdd328ebee5dae2ff3960247044a0983c6992c0f2770f8d7093a26068e1d385,PodSandboxId:86ec69250ff8253f8bc74a5d230ebbcf7a36105b8133f79a7c9c870db90ed0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728588288025681818,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0d6bf6d0-79fc-47a0-b588-a7c47c06e191,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846,PodSandboxId:fbc0eb700ee8cbff7cc6e919d6c674db8b6d7b3694c8ca70a2d81ce5a385c0a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588285772217003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-86brb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e5f869-f82f-4bd4-9d9c-89499fa89c89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728588278099877129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664,PodSandboxId:453ffe467d42c751befb3e2f09eaa4e43416314d797244a4c255966077ed50eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588278100034786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b2c419-7299-4bc4-b263-99408b9484
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea,PodSandboxId:39c9cdf3f7d947934aa345dbc5ee85c57af2eba8066e6683886d2bb0efae5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588273334377136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9368a480685ae4
528d20670060406c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36,PodSandboxId:b590286c4520957ce4dfe6cb3da44b285f654565000cbe5d407c96b76d43c1d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588273283344969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cbcec58925c6e71fe76a35de28ca1b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023,PodSandboxId:21ba7aa94ead484c32379a5c41acc3e6cd5e6e587e0e66f72a9cb0c7ce8b29d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588273280730422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56981c4dc0d3b087e0e04dd21a000497,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba,PodSandboxId:37f20f27a64ee15d5185bf4dd7bee2bc6e698ebe0826a2b8b13462e3a3ff441e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588273287152746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b732db1a340209d45b2d9b80eca5de,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17808ca7-2e16-44f8-bb4e-11751a3bf858 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.769280400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec35eb8d-57ea-4dfc-9fcc-d9a861b2948f name=/runtime.v1.RuntimeService/Version
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.769450356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec35eb8d-57ea-4dfc-9fcc-d9a861b2948f name=/runtime.v1.RuntimeService/Version
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.771061781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d8767a8-7b54-46b3-85ae-3d8b7fb371f9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.772337176Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589087772304773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d8767a8-7b54-46b3-85ae-3d8b7fb371f9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.773788001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73af3d64-dd3f-442e-8ca3-5fa10ad667c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.773881574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73af3d64-dd3f-442e-8ca3-5fa10ad667c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.774075916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588308885762612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdd328ebee5dae2ff3960247044a0983c6992c0f2770f8d7093a26068e1d385,PodSandboxId:86ec69250ff8253f8bc74a5d230ebbcf7a36105b8133f79a7c9c870db90ed0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728588288025681818,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0d6bf6d0-79fc-47a0-b588-a7c47c06e191,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846,PodSandboxId:fbc0eb700ee8cbff7cc6e919d6c674db8b6d7b3694c8ca70a2d81ce5a385c0a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588285772217003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-86brb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e5f869-f82f-4bd4-9d9c-89499fa89c89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728588278099877129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664,PodSandboxId:453ffe467d42c751befb3e2f09eaa4e43416314d797244a4c255966077ed50eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588278100034786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b2c419-7299-4bc4-b263-99408b9484
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea,PodSandboxId:39c9cdf3f7d947934aa345dbc5ee85c57af2eba8066e6683886d2bb0efae5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588273334377136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9368a480685ae4
528d20670060406c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36,PodSandboxId:b590286c4520957ce4dfe6cb3da44b285f654565000cbe5d407c96b76d43c1d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588273283344969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cbcec58925c6e71fe76a35de28ca1b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023,PodSandboxId:21ba7aa94ead484c32379a5c41acc3e6cd5e6e587e0e66f72a9cb0c7ce8b29d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588273280730422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56981c4dc0d3b087e0e04dd21a000497,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba,PodSandboxId:37f20f27a64ee15d5185bf4dd7bee2bc6e698ebe0826a2b8b13462e3a3ff441e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588273287152746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b732db1a340209d45b2d9b80eca5de,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73af3d64-dd3f-442e-8ca3-5fa10ad667c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.837722421Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ce2cfd3-d8b8-42fa-b778-5b90246f8648 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.837844742Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ce2cfd3-d8b8-42fa-b778-5b90246f8648 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.839205535Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60cf6517-5d20-4094-b6b2-f525a7705323 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.839943165Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589087839913000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60cf6517-5d20-4094-b6b2-f525a7705323 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.840813966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee32779a-0069-48dc-ac8f-fcf8539ddb35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.840905454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee32779a-0069-48dc-ac8f-fcf8539ddb35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.841161955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588308885762612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdd328ebee5dae2ff3960247044a0983c6992c0f2770f8d7093a26068e1d385,PodSandboxId:86ec69250ff8253f8bc74a5d230ebbcf7a36105b8133f79a7c9c870db90ed0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728588288025681818,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0d6bf6d0-79fc-47a0-b588-a7c47c06e191,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846,PodSandboxId:fbc0eb700ee8cbff7cc6e919d6c674db8b6d7b3694c8ca70a2d81ce5a385c0a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588285772217003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-86brb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e5f869-f82f-4bd4-9d9c-89499fa89c89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728588278099877129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664,PodSandboxId:453ffe467d42c751befb3e2f09eaa4e43416314d797244a4c255966077ed50eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588278100034786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b2c419-7299-4bc4-b263-99408b9484
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea,PodSandboxId:39c9cdf3f7d947934aa345dbc5ee85c57af2eba8066e6683886d2bb0efae5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588273334377136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9368a480685ae4
528d20670060406c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36,PodSandboxId:b590286c4520957ce4dfe6cb3da44b285f654565000cbe5d407c96b76d43c1d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588273283344969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cbcec58925c6e71fe76a35de28ca1b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023,PodSandboxId:21ba7aa94ead484c32379a5c41acc3e6cd5e6e587e0e66f72a9cb0c7ce8b29d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588273280730422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56981c4dc0d3b087e0e04dd21a000497,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba,PodSandboxId:37f20f27a64ee15d5185bf4dd7bee2bc6e698ebe0826a2b8b13462e3a3ff441e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588273287152746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b732db1a340209d45b2d9b80eca5de,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee32779a-0069-48dc-ac8f-fcf8539ddb35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.888216439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d7460e1-04ac-48fb-b60b-8076d31ce49d name=/runtime.v1.RuntimeService/Version
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.888334770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d7460e1-04ac-48fb-b60b-8076d31ce49d name=/runtime.v1.RuntimeService/Version
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.889896918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=139a33dd-00ad-4b53-ac25-08a31b4a4ce0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.890487023Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589087890387437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=139a33dd-00ad-4b53-ac25-08a31b4a4ce0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.891156997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=29c981d6-78a3-423e-8434-26942d834932 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.891246054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=29c981d6-78a3-423e-8434-26942d834932 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:38:07 no-preload-320324 crio[719]: time="2024-10-10 19:38:07.891581597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588308885762612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdd328ebee5dae2ff3960247044a0983c6992c0f2770f8d7093a26068e1d385,PodSandboxId:86ec69250ff8253f8bc74a5d230ebbcf7a36105b8133f79a7c9c870db90ed0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728588288025681818,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0d6bf6d0-79fc-47a0-b588-a7c47c06e191,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846,PodSandboxId:fbc0eb700ee8cbff7cc6e919d6c674db8b6d7b3694c8ca70a2d81ce5a385c0a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588285772217003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-86brb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e5f869-f82f-4bd4-9d9c-89499fa89c89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728588278099877129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664,PodSandboxId:453ffe467d42c751befb3e2f09eaa4e43416314d797244a4c255966077ed50eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588278100034786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b2c419-7299-4bc4-b263-99408b9484
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea,PodSandboxId:39c9cdf3f7d947934aa345dbc5ee85c57af2eba8066e6683886d2bb0efae5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588273334377136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9368a480685ae4
528d20670060406c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36,PodSandboxId:b590286c4520957ce4dfe6cb3da44b285f654565000cbe5d407c96b76d43c1d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588273283344969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cbcec58925c6e71fe76a35de28ca1b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023,PodSandboxId:21ba7aa94ead484c32379a5c41acc3e6cd5e6e587e0e66f72a9cb0c7ce8b29d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588273280730422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56981c4dc0d3b087e0e04dd21a000497,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba,PodSandboxId:37f20f27a64ee15d5185bf4dd7bee2bc6e698ebe0826a2b8b13462e3a3ff441e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588273287152746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b732db1a340209d45b2d9b80eca5de,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=29c981d6-78a3-423e-8434-26942d834932 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dfabbf70cd449       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   3fd1f120d7e86       storage-provisioner
	2cdd328ebee5d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   86ec69250ff82       busybox
	3c98f0e3e46ce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   fbc0eb700ee8c       coredns-7c65d6cfc9-86brb
	3a26f9cbec8dc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   453ffe467d42c       kube-proxy-vn6sv
	e14d37c6da3f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   3fd1f120d7e86       storage-provisioner
	d59196636b282       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   39c9cdf3f7d94       kube-controller-manager-no-preload-320324
	20a9cb514f18a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   37f20f27a64ee       kube-apiserver-no-preload-320324
	bfc9f1f069a02       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   b590286c45209       etcd-no-preload-320324
	d397ef1d012ac       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   21ba7aa94ead4       kube-scheduler-no-preload-320324
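	The table above is the node-side CRI view of the same containers that appear in the crio debug output. A quick way to reproduce a similar listing by hand (a sketch only; the profile name is inferred from the node name above):

	  # From the host: open a shell on the node for this profile
	  minikube ssh -p no-preload-320324
	  # On the node: list all CRI-O managed containers, including exited ones
	  sudo crictl ps -a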
	
	
	==> coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51761 - 27500 "HINFO IN 1293154880471448858.3682064905009596402. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025178284s
	
	
	==> describe nodes <==
	Name:               no-preload-320324
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-320324
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=no-preload-320324
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T19_15_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 19:15:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-320324
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 19:38:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 19:35:20 +0000   Thu, 10 Oct 2024 19:15:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 19:35:20 +0000   Thu, 10 Oct 2024 19:15:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 19:35:20 +0000   Thu, 10 Oct 2024 19:15:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 19:35:20 +0000   Thu, 10 Oct 2024 19:24:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.11
	  Hostname:    no-preload-320324
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 de1127283fbd43379c307da5d31f891b
	  System UUID:                de112728-3fbd-4337-9c30-7da5d31f891b
	  Boot ID:                    b2b25208-4027-431b-8637-789bdffffd2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-86brb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-320324                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-320324             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-320324    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-vn6sv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-320324             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-8w9lk              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-320324 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-320324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-320324 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-320324 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-320324 event: Registered Node no-preload-320324 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-320324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-320324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-320324 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-320324 event: Registered Node no-preload-320324 in Controller
	
	
	==> dmesg <==
	[Oct10 19:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051625] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042153] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct10 19:24] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.736405] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597623] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.667265] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.066371] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073612] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.187890] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.153053] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.317917] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[ +15.554998] systemd-fstab-generator[1252]: Ignoring "noauto" option for root device
	[  +0.067907] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.962336] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[  +3.366540] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.745112] systemd-fstab-generator[2004]: Ignoring "noauto" option for root device
	[  +3.165730] kauditd_printk_skb: 61 callbacks suppressed
	[Oct10 19:25] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] <==
	{"level":"info","ts":"2024-10-10T19:24:33.872048Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-10T19:24:33.874705Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"a2fc37409d191146","initial-advertise-peer-urls":["https://192.168.72.11:2380"],"listen-peer-urls":["https://192.168.72.11:2380"],"advertise-client-urls":["https://192.168.72.11:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.11:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-10T19:24:33.874786Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-10T19:24:33.874903Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.11:2380"}
	{"level":"info","ts":"2024-10-10T19:24:33.875450Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.11:2380"}
	{"level":"info","ts":"2024-10-10T19:24:33.875715Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:24:35.671177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-10T19:24:35.671255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-10T19:24:35.671296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 received MsgPreVoteResp from a2fc37409d191146 at term 2"}
	{"level":"info","ts":"2024-10-10T19:24:35.671325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 became candidate at term 3"}
	{"level":"info","ts":"2024-10-10T19:24:35.671333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 received MsgVoteResp from a2fc37409d191146 at term 3"}
	{"level":"info","ts":"2024-10-10T19:24:35.671345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 became leader at term 3"}
	{"level":"info","ts":"2024-10-10T19:24:35.671355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a2fc37409d191146 elected leader a2fc37409d191146 at term 3"}
	{"level":"info","ts":"2024-10-10T19:24:35.689874Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:24:35.690094Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:24:35.690677Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-10T19:24:35.690718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-10T19:24:35.689871Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a2fc37409d191146","local-member-attributes":"{Name:no-preload-320324 ClientURLs:[https://192.168.72.11:2379]}","request-path":"/0/members/a2fc37409d191146/attributes","cluster-id":"df21a150cc67cfa3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-10T19:24:35.691655Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:24:35.691658Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:24:35.692578Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.11:2379"}
	{"level":"info","ts":"2024-10-10T19:24:35.693393Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-10T19:34:35.728701Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":814}
	{"level":"info","ts":"2024-10-10T19:34:35.740588Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":814,"took":"11.515394ms","hash":3626833409,"current-db-size-bytes":2711552,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2711552,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-10T19:34:35.740662Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3626833409,"revision":814,"compact-revision":-1}
	
	
	==> kernel <==
	 19:38:08 up 14 min,  0 users,  load average: 0.89, 0.34, 0.21
	Linux no-preload-320324 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] <==
	W1010 19:34:38.091857       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:34:38.092066       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:34:38.093123       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:34:38.093153       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:35:38.094114       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:35:38.094361       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1010 19:35:38.094539       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:35:38.094623       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:35:38.095724       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:35:38.095744       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:37:38.096965       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:37:38.097081       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1010 19:37:38.096966       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:37:38.097180       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:37:38.098503       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:37:38.098592       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
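	The repeated 503 responses above show the apiserver still could not reach the v1beta1.metrics.k8s.io backend as late as 19:37:38. A minimal follow-up check against the same cluster might look like the sketch below; the kubectl context name is inferred from the node name, so treat it as illustrative:

	  # Is the metrics APIService marked Available, and if not, why?
	  kubectl --context no-preload-320324 get apiservice v1beta1.metrics.k8s.io \
	    -o jsonpath='{.status.conditions[?(@.type=="Available")]}'
	  # Does the backing metrics-server deployment have ready endpoints?
	  kubectl --context no-preload-320324 -n kube-system get deploy,endpoints metrics-server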
	
	
	==> kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] <==
	E1010 19:32:40.769978       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:32:41.229696       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:33:10.775918       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:33:11.237499       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:33:40.782637       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:33:41.245153       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:34:10.789354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:34:11.252086       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:34:40.795752       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:34:41.259948       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:35:10.801943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:35:11.267457       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:35:20.190301       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-320324"
	I1010 19:35:33.653755       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="286.896µs"
	E1010 19:35:40.808478       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:35:41.279913       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:35:47.652990       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="134.319µs"
	E1010 19:36:10.815650       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:36:11.289074       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:36:40.822075       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:36:41.296005       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:37:10.828311       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:37:11.304161       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:37:40.834795       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:37:41.312022       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 19:24:38.411824       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 19:24:38.421685       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.11"]
	E1010 19:24:38.421766       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 19:24:38.458130       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 19:24:38.458178       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 19:24:38.458201       1 server_linux.go:169] "Using iptables Proxier"
	I1010 19:24:38.461131       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 19:24:38.461391       1 server.go:483] "Version info" version="v1.31.1"
	I1010 19:24:38.461570       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 19:24:38.463182       1 config.go:199] "Starting service config controller"
	I1010 19:24:38.463227       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 19:24:38.463264       1 config.go:105] "Starting endpoint slice config controller"
	I1010 19:24:38.463285       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 19:24:38.463832       1 config.go:328] "Starting node config controller"
	I1010 19:24:38.463865       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 19:24:38.564462       1 shared_informer.go:320] Caches are synced for node config
	I1010 19:24:38.564516       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1010 19:24:38.564485       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] <==
	I1010 19:24:34.563688       1 serving.go:386] Generated self-signed cert in-memory
	W1010 19:24:37.065842       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 19:24:37.065952       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 19:24:37.065992       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 19:24:37.066018       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 19:24:37.092595       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1010 19:24:37.092823       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 19:24:37.094920       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 19:24:37.095061       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1010 19:24:37.095304       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1010 19:24:37.095385       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 19:24:37.196478       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 10 19:36:59 no-preload-320324 kubelet[1381]: E1010 19:36:59.635799    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
	Oct 10 19:37:02 no-preload-320324 kubelet[1381]: E1010 19:37:02.886953    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589022886383163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:02 no-preload-320324 kubelet[1381]: E1010 19:37:02.887342    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589022886383163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:12 no-preload-320324 kubelet[1381]: E1010 19:37:12.889912    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589032889225270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:12 no-preload-320324 kubelet[1381]: E1010 19:37:12.890018    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589032889225270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:13 no-preload-320324 kubelet[1381]: E1010 19:37:13.635583    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
	Oct 10 19:37:22 no-preload-320324 kubelet[1381]: E1010 19:37:22.891768    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589042891064047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:22 no-preload-320324 kubelet[1381]: E1010 19:37:22.892038    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589042891064047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:28 no-preload-320324 kubelet[1381]: E1010 19:37:28.635991    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
	Oct 10 19:37:32 no-preload-320324 kubelet[1381]: E1010 19:37:32.676903    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 19:37:32 no-preload-320324 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 19:37:32 no-preload-320324 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 19:37:32 no-preload-320324 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 19:37:32 no-preload-320324 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 19:37:32 no-preload-320324 kubelet[1381]: E1010 19:37:32.893888    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589052893320073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:32 no-preload-320324 kubelet[1381]: E1010 19:37:32.893995    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589052893320073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:40 no-preload-320324 kubelet[1381]: E1010 19:37:40.636651    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
	Oct 10 19:37:42 no-preload-320324 kubelet[1381]: E1010 19:37:42.895513    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589062894993639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:42 no-preload-320324 kubelet[1381]: E1010 19:37:42.895785    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589062894993639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:52 no-preload-320324 kubelet[1381]: E1010 19:37:52.636774    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
	Oct 10 19:37:52 no-preload-320324 kubelet[1381]: E1010 19:37:52.897680    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589072897204418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:37:52 no-preload-320324 kubelet[1381]: E1010 19:37:52.897763    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589072897204418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:38:02 no-preload-320324 kubelet[1381]: E1010 19:38:02.899498    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589082899215249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:38:02 no-preload-320324 kubelet[1381]: E1010 19:38:02.899527    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589082899215249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:38:07 no-preload-320324 kubelet[1381]: E1010 19:38:07.636304    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
	
	
	==> storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] <==
	I1010 19:25:08.969389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 19:25:08.982242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 19:25:08.982376       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 19:25:26.394136       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 19:25:26.394650       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a148e308-6b33-4484-b27c-21c54d403579", APIVersion:"v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-320324_1eb5e73e-87ab-4f71-aa5d-dc947bcff248 became leader
	I1010 19:25:26.394713       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-320324_1eb5e73e-87ab-4f71-aa5d-dc947bcff248!
	I1010 19:25:26.495632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-320324_1eb5e73e-87ab-4f71-aa5d-dc947bcff248!
	
	
	==> storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] <==
	I1010 19:24:38.345339       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1010 19:25:08.350627       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-320324 -n no-preload-320324
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-320324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-8w9lk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-320324 describe pod metrics-server-6867b74b74-8w9lk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-320324 describe pod metrics-server-6867b74b74-8w9lk: exit status 1 (64.524666ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-8w9lk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-320324 describe pod metrics-server-6867b74b74-8w9lk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:31:58.552761   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:32:04.753299   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:32:46.989860   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:32:50.017957   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:33:02.611002   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:33:11.751281   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:33:21.616998   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[last message repeated 5 more times]
E1010 19:33:27.816583   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:33:28.350434   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[last message repeated 45 more times]
E1010 19:34:14.583827   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[last message repeated 14 more times]
E1010 19:34:29.521519   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[last message repeated 5 more times]
E1010 19:34:34.815458   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[last message repeated 15 more times]
E1010 19:34:51.414828   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[last message repeated 7 more times]
E1010 19:34:59.530777   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[last message repeated 38 more times]
E1010 19:35:37.649391   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[last message repeated 13 more times]
E1010 19:35:52.587595   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[warning above occurred 32 times in a row; duplicate lines collapsed]
E1010 19:36:23.923647   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[warning above occurred 34 times in a row; duplicate lines collapsed]
E1010 19:36:58.552906   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[warning above occurred 7 times in a row; duplicate lines collapsed]
E1010 19:37:04.753277   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[warning above occurred 45 times in a row; duplicate lines collapsed]
E1010 19:37:50.018116   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[warning above occurred 22 times in a row; duplicate lines collapsed]
E1010 19:38:11.751335   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
[warning above occurred 16 times in a row; duplicate lines collapsed]
E1010 19:38:28.350061   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:39:14.583557   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:39:29.522119   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:39:59.530245   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
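The repeated WARNING lines above are the same pod-list request retried against the stopped apiserver; once the 9m0s context expires, the client rate limiter returns the "context deadline exceeded" shown in the last line. As a hedged, manual reproduction of that request (the old-k8s-version-947203 cert paths are assumptions, inferred from the client.crt paths this run logs for other profiles):

	curl --cacert /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt \
	  --cert /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.crt \
	  --key /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key \
	  "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"

While the apiserver on 192.168.61.112:8443 is down, this fails at TCP connect with "connection refused", matching every WARNING above.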
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947203 -n old-k8s-version-947203
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 2 (249.186048ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-947203" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
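Not part of the test output, but a minimal manual triage for this failure mode (dashboard pods never listed because the apiserver stayed down after the stop/start) might look like the following, reusing the profile name from this run; the crictl step is an assumption about how one would inspect the cri-o containers on the node:

	out/minikube-linux-amd64 status -p old-k8s-version-947203
	out/minikube-linux-amd64 ssh -p old-k8s-version-947203 -- sudo crictl ps -a | grep kube-apiserver
	kubectl --context old-k8s-version-947203 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard

The first command mirrors the status checks the harness runs below (Host "Running", APIServer "Stopped"); the last mirrors the pod list the wait loop above kept retrying.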
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 2 (243.812809ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-947203 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-947203 logs -n 25: (1.606697578s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-029826             | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-029826                  | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-029826 --memory=2200 --alsologtostderr   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-541370            | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-029826 image list                           | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:17 UTC | 10 Oct 24 19:18 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320324                  | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-947203        | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-361847  | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-541370                 | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-947203             | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-361847       | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC | 10 Oct 24 19:29 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 19:21:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 19:21:13.943219  148525 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:21:13.943336  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943343  148525 out.go:358] Setting ErrFile to fd 2...
	I1010 19:21:13.943347  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943560  148525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:21:13.944109  148525 out.go:352] Setting JSON to false
	I1010 19:21:13.945219  148525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11020,"bootTime":1728577054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:21:13.945321  148525 start.go:139] virtualization: kvm guest
	I1010 19:21:13.947915  148525 out.go:177] * [default-k8s-diff-port-361847] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:21:13.950021  148525 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:21:13.950037  148525 notify.go:220] Checking for updates...
	I1010 19:21:13.952994  148525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:21:13.954661  148525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:21:13.956438  148525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:21:13.958502  148525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:21:13.960099  148525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:21:13.961930  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:21:13.962374  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.962450  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.978323  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I1010 19:21:13.978926  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.979520  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.979538  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.979954  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.980144  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:13.980446  148525 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:21:13.980745  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.980784  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.996046  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I1010 19:21:13.996534  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.997069  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.997097  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.997530  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.997788  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:14.033593  148525 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 19:21:14.035367  148525 start.go:297] selected driver: kvm2
	I1010 19:21:14.035394  148525 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.035526  148525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:21:14.036341  148525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.036452  148525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:21:14.052462  148525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:21:14.052918  148525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:21:14.052967  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:21:14.053019  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:21:14.053067  148525 start.go:340] cluster config:
	{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.053178  148525 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.055485  148525 out.go:177] * Starting "default-k8s-diff-port-361847" primary control-plane node in "default-k8s-diff-port-361847" cluster
	I1010 19:21:16.773106  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:14.056945  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:21:14.057002  148525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 19:21:14.057014  148525 cache.go:56] Caching tarball of preloaded images
	I1010 19:21:14.057118  148525 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:21:14.057134  148525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 19:21:14.057268  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:21:14.057476  148525 start.go:360] acquireMachinesLock for default-k8s-diff-port-361847: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:21:22.853158  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:25.925174  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:32.005160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:35.077198  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:41.157130  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:44.229127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:50.309136  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:53.381191  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:59.461129  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:02.533201  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:08.613124  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:11.685169  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:17.765161  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:20.837208  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:26.917127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:29.989172  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:36.069147  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:39.141173  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:45.221160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:48.293141  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:51.297376  147758 start.go:364] duration metric: took 3m49.312490934s to acquireMachinesLock for "embed-certs-541370"
	I1010 19:22:51.297453  147758 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:22:51.297464  147758 fix.go:54] fixHost starting: 
	I1010 19:22:51.297787  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:22:51.297848  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:22:51.314087  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1010 19:22:51.314588  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:22:51.315115  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:22:51.315138  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:22:51.315509  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:22:51.315691  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:22:51.315879  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:22:51.317597  147758 fix.go:112] recreateIfNeeded on embed-certs-541370: state=Stopped err=<nil>
	I1010 19:22:51.317621  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	W1010 19:22:51.317781  147758 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:22:51.319664  147758 out.go:177] * Restarting existing kvm2 VM for "embed-certs-541370" ...
	I1010 19:22:51.320967  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Start
	I1010 19:22:51.321134  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring networks are active...
	I1010 19:22:51.322026  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network default is active
	I1010 19:22:51.322468  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network mk-embed-certs-541370 is active
	I1010 19:22:51.322874  147758 main.go:141] libmachine: (embed-certs-541370) Getting domain xml...
	I1010 19:22:51.323687  147758 main.go:141] libmachine: (embed-certs-541370) Creating domain...
	I1010 19:22:51.294881  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:22:51.294927  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295226  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:22:51.295256  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295454  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:22:51.297198  147213 machine.go:96] duration metric: took 4m37.414594306s to provisionDockerMachine
	I1010 19:22:51.297252  147213 fix.go:56] duration metric: took 4m37.436635356s for fixHost
	I1010 19:22:51.297259  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 4m37.436668423s
	W1010 19:22:51.297278  147213 start.go:714] error starting host: provision: host is not running
	W1010 19:22:51.297382  147213 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1010 19:22:51.297396  147213 start.go:729] Will try again in 5 seconds ...
	I1010 19:22:52.568699  147758 main.go:141] libmachine: (embed-certs-541370) Waiting to get IP...
	I1010 19:22:52.569582  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.569952  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.570018  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.569935  148914 retry.go:31] will retry after 261.244287ms: waiting for machine to come up
	I1010 19:22:52.832639  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.833280  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.833310  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.833200  148914 retry.go:31] will retry after 304.116732ms: waiting for machine to come up
	I1010 19:22:53.138770  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.139091  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.139124  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.139055  148914 retry.go:31] will retry after 484.354474ms: waiting for machine to come up
	I1010 19:22:53.624831  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.625293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.625323  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.625234  148914 retry.go:31] will retry after 591.916836ms: waiting for machine to come up
	I1010 19:22:54.219214  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.219732  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.219763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.219673  148914 retry.go:31] will retry after 614.162479ms: waiting for machine to come up
	I1010 19:22:54.835573  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.836038  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.836063  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.835988  148914 retry.go:31] will retry after 824.170953ms: waiting for machine to come up
	I1010 19:22:55.662092  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:55.662646  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:55.662668  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:55.662586  148914 retry.go:31] will retry after 928.483848ms: waiting for machine to come up
	I1010 19:22:56.593200  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:56.593724  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:56.593756  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:56.593679  148914 retry.go:31] will retry after 941.138644ms: waiting for machine to come up
	I1010 19:22:56.299351  147213 start.go:360] acquireMachinesLock for no-preload-320324: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:22:57.536977  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:57.537403  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:57.537429  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:57.537331  148914 retry.go:31] will retry after 1.262203584s: waiting for machine to come up
	I1010 19:22:58.801921  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:58.802420  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:58.802454  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:58.802381  148914 retry.go:31] will retry after 2.154751391s: waiting for machine to come up
	I1010 19:23:00.960100  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:00.960661  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:00.960684  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:00.960607  148914 retry.go:31] will retry after 1.945155171s: waiting for machine to come up
	I1010 19:23:02.907705  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:02.908097  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:02.908129  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:02.908038  148914 retry.go:31] will retry after 3.245262469s: waiting for machine to come up
	I1010 19:23:06.157527  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:06.157897  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:06.157925  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:06.157858  148914 retry.go:31] will retry after 3.973579024s: waiting for machine to come up
	I1010 19:23:11.321975  148123 start.go:364] duration metric: took 3m17.648521443s to acquireMachinesLock for "old-k8s-version-947203"
	I1010 19:23:11.322043  148123 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:11.322056  148123 fix.go:54] fixHost starting: 
	I1010 19:23:11.322537  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:11.322596  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:11.339586  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1010 19:23:11.340091  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:11.340686  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:23:11.340719  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:11.341101  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:11.341295  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:11.341453  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetState
	I1010 19:23:11.343215  148123 fix.go:112] recreateIfNeeded on old-k8s-version-947203: state=Stopped err=<nil>
	I1010 19:23:11.343247  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	W1010 19:23:11.343408  148123 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:11.345653  148123 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-947203" ...
	I1010 19:23:10.135369  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has current primary IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135830  147758 main.go:141] libmachine: (embed-certs-541370) Found IP for machine: 192.168.39.120
	I1010 19:23:10.135839  147758 main.go:141] libmachine: (embed-certs-541370) Reserving static IP address...
	I1010 19:23:10.136283  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.136311  147758 main.go:141] libmachine: (embed-certs-541370) Reserved static IP address: 192.168.39.120
	I1010 19:23:10.136327  147758 main.go:141] libmachine: (embed-certs-541370) DBG | skip adding static IP to network mk-embed-certs-541370 - found existing host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"}
	I1010 19:23:10.136339  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Getting to WaitForSSH function...
	I1010 19:23:10.136351  147758 main.go:141] libmachine: (embed-certs-541370) Waiting for SSH to be available...
	I1010 19:23:10.138861  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139259  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.139293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139438  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH client type: external
	I1010 19:23:10.139472  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa (-rw-------)
	I1010 19:23:10.139517  147758 main.go:141] libmachine: (embed-certs-541370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:10.139541  147758 main.go:141] libmachine: (embed-certs-541370) DBG | About to run SSH command:
	I1010 19:23:10.139562  147758 main.go:141] libmachine: (embed-certs-541370) DBG | exit 0
	I1010 19:23:10.261078  147758 main.go:141] libmachine: (embed-certs-541370) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:10.261533  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetConfigRaw
	I1010 19:23:10.262192  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.265071  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265467  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.265515  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265737  147758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/config.json ...
	I1010 19:23:10.265941  147758 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:10.265960  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:10.266188  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.269186  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269618  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.269649  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269799  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.269984  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270206  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270345  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.270550  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.270834  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.270849  147758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:10.373285  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:10.373316  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373625  147758 buildroot.go:166] provisioning hostname "embed-certs-541370"
	I1010 19:23:10.373660  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373835  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.376552  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.376951  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.376994  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.377132  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.377332  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377489  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377606  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.377745  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.377918  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.377930  147758 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-541370 && echo "embed-certs-541370" | sudo tee /etc/hostname
	I1010 19:23:10.495847  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-541370
	
	I1010 19:23:10.495880  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.498868  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499205  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.499247  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499362  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.499556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499700  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499829  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.499961  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.500187  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.500210  147758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-541370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-541370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-541370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:10.614318  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:10.614357  147758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:10.614412  147758 buildroot.go:174] setting up certificates
	I1010 19:23:10.614429  147758 provision.go:84] configureAuth start
	I1010 19:23:10.614457  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.614763  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.617457  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.617888  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.617916  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.618078  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.620243  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620635  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.620666  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620789  147758 provision.go:143] copyHostCerts
	I1010 19:23:10.620895  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:10.620913  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:10.620998  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:10.621111  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:10.621123  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:10.621159  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:10.621245  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:10.621257  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:10.621292  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:10.621364  147758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.embed-certs-541370 san=[127.0.0.1 192.168.39.120 embed-certs-541370 localhost minikube]
	I1010 19:23:10.697456  147758 provision.go:177] copyRemoteCerts
	I1010 19:23:10.697515  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:10.697547  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.700439  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.700799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700956  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.701162  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.701320  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.701465  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:10.783442  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:10.808446  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 19:23:10.832117  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:23:10.856286  147758 provision.go:87] duration metric: took 241.840139ms to configureAuth
	I1010 19:23:10.856318  147758 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:10.856528  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:10.856640  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.859252  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859677  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.859708  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859916  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.860087  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860222  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.860524  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.860688  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.860702  147758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:11.086349  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:11.086375  147758 machine.go:96] duration metric: took 820.421344ms to provisionDockerMachine
	I1010 19:23:11.086386  147758 start.go:293] postStartSetup for "embed-certs-541370" (driver="kvm2")
	I1010 19:23:11.086401  147758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:11.086423  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.086755  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:11.086783  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.089482  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.089838  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.089860  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.090042  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.090253  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.090410  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.090535  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.172474  147758 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:11.176699  147758 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:11.176733  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:11.176800  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:11.176899  147758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:11.177044  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:11.186985  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:11.211385  147758 start.go:296] duration metric: took 124.982089ms for postStartSetup
	I1010 19:23:11.211442  147758 fix.go:56] duration metric: took 19.913977793s for fixHost
	I1010 19:23:11.211472  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.214421  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214780  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.214812  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214999  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.215219  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215429  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215612  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.215786  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:11.215974  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:11.215985  147758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:11.321786  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588191.295446348
	
	I1010 19:23:11.321814  147758 fix.go:216] guest clock: 1728588191.295446348
	I1010 19:23:11.321822  147758 fix.go:229] Guest: 2024-10-10 19:23:11.295446348 +0000 UTC Remote: 2024-10-10 19:23:11.211447413 +0000 UTC m=+249.373680838 (delta=83.998935ms)
	I1010 19:23:11.321870  147758 fix.go:200] guest clock delta is within tolerance: 83.998935ms
	I1010 19:23:11.321877  147758 start.go:83] releasing machines lock for "embed-certs-541370", held for 20.024455781s
	I1010 19:23:11.321905  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.322169  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:11.325004  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325350  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.325375  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325566  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326090  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326294  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326383  147758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:11.326444  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.326501  147758 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:11.326529  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.329311  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329657  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.329690  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329713  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329866  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330057  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330160  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.330188  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.330204  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330346  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.330538  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330687  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330821  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.406525  147758 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:11.428958  147758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:11.577663  147758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:11.584024  147758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:11.584112  147758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:11.603163  147758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:11.603190  147758 start.go:495] detecting cgroup driver to use...
	I1010 19:23:11.603291  147758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:11.624744  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:11.645477  147758 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:11.645537  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:11.660216  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:11.675019  147758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:11.796038  147758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:11.967750  147758 docker.go:233] disabling docker service ...
	I1010 19:23:11.967828  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:11.983184  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:12.001603  147758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:12.149408  147758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:12.306724  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:12.324302  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:12.345426  147758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:12.345508  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.357812  147758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:12.357883  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.370095  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.382389  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.395000  147758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:12.408429  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.426851  147758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.450568  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.463434  147758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:12.474537  147758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:12.474606  147758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:12.489074  147758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:12.500048  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:12.635695  147758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:12.733511  147758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:12.733593  147758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:12.739072  147758 start.go:563] Will wait 60s for crictl version
	I1010 19:23:12.739138  147758 ssh_runner.go:195] Run: which crictl
	I1010 19:23:12.743675  147758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:12.792272  147758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:12.792379  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.829968  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.862579  147758 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
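
Editor's aside: after restarting CRI-O, the log above notes "Will wait 60s for socket path /var/run/crio/crio.sock". A minimal sketch of that kind of poll is shown below; the waitForSocket helper and the plain os.Stat loop with a fixed 500ms interval are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path (e.g. the CRI-O socket) until it
// appears or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file exists, so the runtime is up (or very nearly so)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
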
	I1010 19:23:11.347158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .Start
	I1010 19:23:11.347359  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring networks are active...
	I1010 19:23:11.348252  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network default is active
	I1010 19:23:11.348689  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network mk-old-k8s-version-947203 is active
	I1010 19:23:11.349139  148123 main.go:141] libmachine: (old-k8s-version-947203) Getting domain xml...
	I1010 19:23:11.349799  148123 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:23:12.671472  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting to get IP...
	I1010 19:23:12.672628  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.673145  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.673278  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.673132  149057 retry.go:31] will retry after 280.111667ms: waiting for machine to come up
	I1010 19:23:12.954865  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.955325  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.955377  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.955300  149057 retry.go:31] will retry after 289.967238ms: waiting for machine to come up
	I1010 19:23:13.247039  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.247663  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.247769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.247632  149057 retry.go:31] will retry after 319.085935ms: waiting for machine to come up
	I1010 19:23:12.863797  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:12.867335  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.867760  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:12.867794  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.868029  147758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:12.872503  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:12.887684  147758 kubeadm.go:883] updating cluster {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:12.887809  147758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:12.887853  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:12.924155  147758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:12.924240  147758 ssh_runner.go:195] Run: which lz4
	I1010 19:23:12.928613  147758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:12.933024  147758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:12.933069  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:14.450790  147758 crio.go:462] duration metric: took 1.522223644s to copy over tarball
	I1010 19:23:14.450893  147758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:16.642155  147758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191220673s)
	I1010 19:23:16.642193  147758 crio.go:469] duration metric: took 2.191371146s to extract the tarball
	I1010 19:23:16.642202  147758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:16.679611  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:16.723840  147758 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:16.723865  147758 cache_images.go:84] Images are preloaded, skipping loading
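
Editor's aside: the preload handling above copies the preloaded-images tarball to the guest and unpacks it with "tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4" before re-checking crictl images. A rough local (non-SSH) sketch of the same extraction via os/exec follows; extractPreload is a hypothetical name and the paths are examples only.

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload shells out to tar to unpack an lz4-compressed preload
// tarball into destDir, mirroring the command visible in the log above.
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
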
	I1010 19:23:16.723874  147758 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.1 crio true true} ...
	I1010 19:23:16.723998  147758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-541370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:16.724081  147758 ssh_runner.go:195] Run: crio config
	I1010 19:23:16.779659  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:16.779682  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:16.779693  147758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:16.779714  147758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-541370 NodeName:embed-certs-541370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:16.779842  147758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-541370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:16.779904  147758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:16.791424  147758 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:16.791493  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:16.801715  147758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1010 19:23:16.821364  147758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:16.842703  147758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1010 19:23:16.864835  147758 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:16.868928  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
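
Editor's aside: the /etc/hosts updates above are done with a small shell pipeline (grep -v the old entry, echo the new "ip<TAB>host" line, copy the file back). A rough Go equivalent is sketched below; the ensureHostsEntry helper is hypothetical and is not how minikube itself performs the edit (it runs the bash one-liner shown in the log).

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "<TAB>host" and appends
// a fresh "ip<TAB>host" mapping, then writes the file back in place.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Demonstrated on a throwaway copy rather than the real /etc/hosts.
	tmp, err := os.CreateTemp("", "hosts-example")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer os.Remove(tmp.Name())
	tmp.WriteString("127.0.0.1\tlocalhost\n")
	tmp.Close()
	if err := ensureHostsEntry(tmp.Name(), "192.168.39.120", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
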
	I1010 19:23:13.568192  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.568690  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.568721  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.568657  149057 retry.go:31] will retry after 575.032546ms: waiting for machine to come up
	I1010 19:23:14.145650  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.146293  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.146319  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.146234  149057 retry.go:31] will retry after 702.803794ms: waiting for machine to come up
	I1010 19:23:14.851201  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.851736  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.851769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.851706  149057 retry.go:31] will retry after 883.195401ms: waiting for machine to come up
	I1010 19:23:15.736627  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:15.737199  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:15.737252  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:15.737155  149057 retry.go:31] will retry after 794.699815ms: waiting for machine to come up
	I1010 19:23:16.533952  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:16.534510  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:16.534545  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:16.534443  149057 retry.go:31] will retry after 961.751912ms: waiting for machine to come up
	I1010 19:23:17.497535  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:17.498007  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:17.498035  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:17.497936  149057 retry.go:31] will retry after 1.423503471s: waiting for machine to come up
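
Editor's aside: the interleaved old-k8s-version-947203 lines above show retry.go waiting for the VM to obtain an IP, backing off a little longer on each attempt ("will retry after 280ms ... 961ms ... 1.42s"). A minimal sketch of that retry-with-backoff pattern is shown here; the retryUntil name and the exact jitter policy are assumptions for illustration, not the actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil runs op until it succeeds or the timeout elapses, sleeping a
// jittered, doubling delay between attempts and logging each retry.
func retryUntil(timeout time.Duration, op func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
}
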
	I1010 19:23:16.883162  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:17.027646  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:17.045083  147758 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370 for IP: 192.168.39.120
	I1010 19:23:17.045108  147758 certs.go:194] generating shared ca certs ...
	I1010 19:23:17.045130  147758 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:17.045491  147758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:17.045561  147758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:17.045579  147758 certs.go:256] generating profile certs ...
	I1010 19:23:17.045730  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/client.key
	I1010 19:23:17.045814  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key.dd7630a8
	I1010 19:23:17.045874  147758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key
	I1010 19:23:17.046015  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:17.046055  147758 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:17.046075  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:17.046114  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:17.046150  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:17.046177  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:17.046235  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:17.047131  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:17.087057  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:17.137707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:17.181707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:17.213227  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 19:23:17.247846  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:17.275989  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:17.301144  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 19:23:17.326232  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:17.350586  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:17.374666  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:17.399570  147758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:17.417846  147758 ssh_runner.go:195] Run: openssl version
	I1010 19:23:17.424206  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:17.436091  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441020  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441090  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.447318  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:17.459191  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:17.470878  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476185  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476248  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.482808  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:17.494626  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:17.506522  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511484  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511558  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.517445  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:17.529109  147758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:17.534139  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:17.540846  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:17.547429  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:17.554350  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:17.561036  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:17.567571  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
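
Editor's aside: the openssl "-checkend 86400" calls above verify that each control-plane certificate remains valid for at least 24 hours. A minimal Go equivalent using crypto/x509 is sketched below; the checkCertValidFor name is an assumption for illustration, and the path in main is just an example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkCertValidFor reports an error if the PEM certificate at path expires
// within the next d (mirroring `openssl x509 -checkend`).
func checkCertValidFor(path string, d time.Duration) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(d).After(cert.NotAfter) {
		return fmt.Errorf("%s expires at %s", path, cert.NotAfter)
	}
	return nil
}

func main() {
	// Example path only; substitute any PEM-encoded certificate.
	if err := checkCertValidFor("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour); err != nil {
		fmt.Println(err)
	}
}
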
	I1010 19:23:17.574019  147758 kubeadm.go:392] StartCluster: {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:17.574128  147758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:17.574187  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.612699  147758 cri.go:89] found id: ""
	I1010 19:23:17.612804  147758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:17.623827  147758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:17.623856  147758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:17.623917  147758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:17.634732  147758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:17.635754  147758 kubeconfig.go:125] found "embed-certs-541370" server: "https://192.168.39.120:8443"
	I1010 19:23:17.637813  147758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:17.648543  147758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I1010 19:23:17.648590  147758 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:17.648606  147758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:17.648671  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.693966  147758 cri.go:89] found id: ""
	I1010 19:23:17.694057  147758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:17.715977  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:17.727871  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:17.727891  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:17.727942  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:17.738274  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:17.738340  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:17.748925  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:17.758945  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:17.759008  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:17.769169  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.779196  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:17.779282  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.790948  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:17.802264  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:17.802332  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:17.814009  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:17.826820  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:17.947270  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.128720  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.181409785s)
	I1010 19:23:19.128770  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.343735  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.419728  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.529802  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:19.529930  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.030019  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.530833  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.558314  147758 api_server.go:72] duration metric: took 1.028510044s to wait for apiserver process to appear ...
	I1010 19:23:20.558350  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:23:20.558375  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:20.558991  147758 api_server.go:269] stopped: https://192.168.39.120:8443/healthz: Get "https://192.168.39.120:8443/healthz": dial tcp 192.168.39.120:8443: connect: connection refused
	I1010 19:23:21.058727  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:18.923057  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:18.923636  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:18.923671  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:18.923582  149057 retry.go:31] will retry after 2.09836426s: waiting for machine to come up
	I1010 19:23:21.023500  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:21.024054  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:21.024084  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:21.023999  149057 retry.go:31] will retry after 2.809962093s: waiting for machine to come up
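
Editor's aside: the api_server.go probes around this point poll https://192.168.39.120:8443/healthz, tolerating "connection refused", 403 and 500 responses until the apiserver reports healthy. A minimal sketch of that loop is shown below; the pollHealthz name, the insecure TLS configuration and the fixed 500ms interval are assumptions for illustration, not minikube's actual api_server.go logic.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz hits the apiserver's /healthz endpoint until it returns 200 or
// the timeout elapses, printing intermediate failures along the way.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate here, so verification
		// is skipped for this illustrative check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	_ = pollHealthz("https://192.168.39.120:8443/healthz", 60*time.Second)
}
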
	I1010 19:23:23.187135  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:23:23.187187  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:23:23.187203  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.233367  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.233414  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:23.558658  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.575108  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.575139  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.058679  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.065735  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:24.065763  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.559440  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.565460  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:23:24.571828  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:23:24.571859  147758 api_server.go:131] duration metric: took 4.013501806s to wait for apiserver health ...
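
The wait loop above simply polls /healthz until the apiserver stops answering 500 (its post-start hooks still settling) and finally returns 200 "ok". A minimal Go sketch of that polling pattern, assuming an illustrative interval and timeout rather than minikube's actual values:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        // The apiserver is still coming up on a self-signed certificate, so this
        // sketch skips certificate verification.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // 200 "ok": the control plane answered healthy
                }
                // A 500 listing "[-]poststarthook/... failed" just means the
                // post-start hooks have not finished; keep polling.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.120:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
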
	I1010 19:23:24.571869  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:24.571875  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:24.573875  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:23:24.575458  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:23:24.586870  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
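
"Configuring bridge CNI" here amounts to dropping a conflist into /etc/cni/net.d on the guest. A rough sketch of that step, using a generic bridge/portmap conflist as a stand-in for the 496-byte file actually copied above:

    package main

    import "os"

    // A generic bridge CNI configuration; the subnet and plugin options are
    // illustrative, not the exact contents minikube writes.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        // 0644 so the container runtime (CRI-O here) can read the config.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }
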
	I1010 19:23:24.624362  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:23:24.643465  147758 system_pods.go:59] 8 kube-system pods found
	I1010 19:23:24.643516  147758 system_pods.go:61] "coredns-7c65d6cfc9-fgtkg" [df696e79-ca6f-4d73-a57e-9c6cdc93c505] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:23:24.643532  147758 system_pods.go:61] "etcd-embed-certs-541370" [254fa12c-b0d2-499f-8dd9-c1505efeaaab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:23:24.643543  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [fcd3809d-d325-4481-8e86-c246e29458fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:23:24.643565  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ab0fdd6b-d9b7-48dc-b82f-29b21d2295ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:23:24.643584  147758 system_pods.go:61] "kube-proxy-f5l6x" [446383fa-44c5-4b9e-bfc5-e38799597e75] Running
	I1010 19:23:24.643592  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [1c6af7e7-ce16-4ae2-8feb-e5d474173de1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:23:24.643603  147758 system_pods.go:61] "metrics-server-6867b74b74-kw529" [aad00321-d499-4563-849e-286d6e699fc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:23:24.643611  147758 system_pods.go:61] "storage-provisioner" [df4ae621-5066-4276-9276-a0538a9f9dd1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:23:24.643620  147758 system_pods.go:74] duration metric: took 19.234558ms to wait for pod list to return data ...
	I1010 19:23:24.643637  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:23:24.651647  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:23:24.651683  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:23:24.651699  147758 node_conditions.go:105] duration metric: took 8.056629ms to run NodePressure ...
	I1010 19:23:24.651720  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:24.915651  147758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921104  147758 kubeadm.go:739] kubelet initialised
	I1010 19:23:24.921131  147758 kubeadm.go:740] duration metric: took 5.44643ms waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921142  147758 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:23:24.927535  147758 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
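
The pod_ready.go wait that follows keeps re-reading each system-critical pod until its PodReady condition flips to True, which is why the log alternates "Ready":"False" lines until 19:23:34. A minimal client-go sketch of that idea; the kubeconfig path is a placeholder and the poll interval is an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's PodReady condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-fgtkg", metav1.GetOptions{})
            if err == nil && podIsReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
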
	I1010 19:23:23.837827  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:23.838271  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:23.838295  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:23.838220  149057 retry.go:31] will retry after 2.562944835s: waiting for machine to come up
	I1010 19:23:26.402699  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:26.403250  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:26.403294  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:26.403192  149057 retry.go:31] will retry after 3.867656846s: waiting for machine to come up
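
The retry.go lines above show the machine-IP wait: query the libvirt network's DHCP leases, sleep a growing interval, try again. A small sketch of that backoff pattern, with a stand-in lookup function in place of the real libvirt query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a placeholder for reading the domain's DHCP lease from libvirt.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address of domain")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            // Jittered, growing delays like the 195ms -> 344ms -> 2.5s -> 3.8s waits in the log.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            backoff *= 2
        }
        return "", fmt.Errorf("machine did not get an IP within %v", timeout)
    }

    func main() {
        if _, err := waitForIP(30 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
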
	I1010 19:23:26.932764  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:28.936055  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.434959  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.893914  148525 start.go:364] duration metric: took 2m17.836396131s to acquireMachinesLock for "default-k8s-diff-port-361847"
	I1010 19:23:31.893993  148525 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:31.894007  148525 fix.go:54] fixHost starting: 
	I1010 19:23:31.894438  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:31.894502  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:31.914583  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I1010 19:23:31.915054  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:31.915535  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:23:31.915560  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:31.915967  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:31.916207  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:31.916387  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:23:31.918035  148525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-361847: state=Stopped err=<nil>
	I1010 19:23:31.918073  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	W1010 19:23:31.918241  148525 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:31.920390  148525 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-361847" ...
	I1010 19:23:30.275222  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.275762  148123 main.go:141] libmachine: (old-k8s-version-947203) Found IP for machine: 192.168.61.112
	I1010 19:23:30.275779  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserving static IP address...
	I1010 19:23:30.275794  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has current primary IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.276478  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.276529  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserved static IP address: 192.168.61.112
	I1010 19:23:30.276553  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | skip adding static IP to network mk-old-k8s-version-947203 - found existing host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"}
	I1010 19:23:30.276574  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:23:30.276590  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting for SSH to be available...
	I1010 19:23:30.278486  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278854  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.278890  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278984  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:23:30.279016  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:23:30.279051  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:30.279068  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:23:30.279080  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:23:30.405053  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: <nil>: 
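
"Using SSH client type: external" means libmachine shells out to /usr/bin/ssh with the options logged above and runs `exit 0` to confirm the guest is reachable. A sketch of that probe; the helper name is ours and error handling is minimal:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshExit0 runs "exit 0" over ssh with host-key checking disabled and the
    // machine's private key, mirroring the WaitForSSH step in the log.
    func sshExit0(ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
        return err
    }

    func main() {
        _ = sshExit0("192.168.61.112", "/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa")
    }
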
	I1010 19:23:30.405492  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:23:30.406224  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.408830  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409178  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.409204  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409462  148123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:23:30.409673  148123 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:30.409693  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:30.409916  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.412131  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412400  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.412427  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.412770  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.412965  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.413117  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.413368  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.413653  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.413668  148123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:30.525149  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:30.525176  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525417  148123 buildroot.go:166] provisioning hostname "old-k8s-version-947203"
	I1010 19:23:30.525478  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525703  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.528343  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528681  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.528708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528841  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.529035  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529185  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529303  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.529460  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.529635  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.529645  148123 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947203 && echo "old-k8s-version-947203" | sudo tee /etc/hostname
	I1010 19:23:30.655605  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947203
	
	I1010 19:23:30.655644  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.658439  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.658834  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.658869  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.659063  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.659285  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659458  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.659733  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.659905  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.659921  148123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:30.781974  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:30.782011  148123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:30.782033  148123 buildroot.go:174] setting up certificates
	I1010 19:23:30.782048  148123 provision.go:84] configureAuth start
	I1010 19:23:30.782059  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.782379  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.785189  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785584  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.785613  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785782  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.788086  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788432  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.788458  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788582  148123 provision.go:143] copyHostCerts
	I1010 19:23:30.788645  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:30.788660  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:30.788729  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:30.788839  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:30.788867  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:30.788900  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:30.788977  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:30.788986  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:30.789013  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:30.789081  148123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947203 san=[127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]
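
The configureAuth step regenerates a server certificate whose subject alternative names cover the machine's IPs and hostnames listed in the log. A library-style crypto/x509 sketch of issuing such a certificate from an already-loaded CA pair; the validity period and organization are illustrative, not minikube's values:

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert creates a fresh key and a CA-signed server certificate
    // whose SANs mirror the log: 127.0.0.1, 192.168.61.112, localhost, minikube,
    // old-k8s-version-947203. It returns the DER-encoded certificate and key.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-947203"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.112")},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-947203"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }
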
	I1010 19:23:31.240626  148123 provision.go:177] copyRemoteCerts
	I1010 19:23:31.240698  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:31.240737  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.243678  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244108  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.244138  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244356  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.244565  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.244765  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.244929  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.331337  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:31.356266  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1010 19:23:31.381186  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:31.405301  148123 provision.go:87] duration metric: took 623.235619ms to configureAuth
	I1010 19:23:31.405337  148123 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:31.405541  148123 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:23:31.405620  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.408111  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408444  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.408470  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408695  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.408947  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409109  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.409374  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.409583  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.409609  148123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:31.647023  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:31.647050  148123 machine.go:96] duration metric: took 1.237364012s to provisionDockerMachine
	I1010 19:23:31.647063  148123 start.go:293] postStartSetup for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:23:31.647073  148123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:31.647104  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.647464  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:31.647502  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.649991  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650288  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.650311  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650484  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.650637  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.650832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.650968  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.735985  148123 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:31.740689  148123 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:31.740725  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:31.740803  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:31.740947  148123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:31.741057  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:31.751383  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:31.777091  148123 start.go:296] duration metric: took 130.011618ms for postStartSetup
	I1010 19:23:31.777134  148123 fix.go:56] duration metric: took 20.455078936s for fixHost
	I1010 19:23:31.777158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.780083  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780661  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.780708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.781069  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781245  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781382  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.781534  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.781711  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.781721  148123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:31.893727  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588211.849777581
	
	I1010 19:23:31.893756  148123 fix.go:216] guest clock: 1728588211.849777581
	I1010 19:23:31.893763  148123 fix.go:229] Guest: 2024-10-10 19:23:31.849777581 +0000 UTC Remote: 2024-10-10 19:23:31.777138808 +0000 UTC m=+218.253674512 (delta=72.638773ms)
	I1010 19:23:31.893806  148123 fix.go:200] guest clock delta is within tolerance: 72.638773ms
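
The fix.go lines compare the guest's `date +%s.%N` output with the host clock and accept the machine when the delta is small. A sketch of that comparison; the 2s tolerance is an assumption for illustration:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDeltaWithinTolerance parses the guest's epoch timestamp and compares
    // it to the host clock. Float parsing loses sub-microsecond precision, which
    // is fine for a tolerance check like this.
    func clockDeltaWithinTolerance(guestOutput string, tolerance time.Duration) (time.Duration, bool) {
        secs, err := strconv.ParseFloat(guestOutput, 64)
        if err != nil {
            return 0, false
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        // "1728588211.849777581" is the guest value from the log above.
        delta, ok := clockDeltaWithinTolerance("1728588211.849777581", 2*time.Second)
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
    }
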
	I1010 19:23:31.893813  148123 start.go:83] releasing machines lock for "old-k8s-version-947203", held for 20.571797307s
	I1010 19:23:31.893848  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.894151  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:31.896747  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897156  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.897207  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897385  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898036  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898375  148123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:31.898422  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.898461  148123 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:31.898487  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.901274  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901450  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901608  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901649  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901784  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.901920  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901945  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901952  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902112  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.902130  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902306  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902348  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.902412  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902533  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:32.016264  148123 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:32.022618  148123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:32.169502  148123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:32.175960  148123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:32.176039  148123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:32.193584  148123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:32.193615  148123 start.go:495] detecting cgroup driver to use...
	I1010 19:23:32.193698  148123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:32.210923  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:32.227172  148123 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:32.227254  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:32.242142  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:32.256896  148123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:32.387462  148123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:32.575184  148123 docker.go:233] disabling docker service ...
	I1010 19:23:32.575257  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:32.590667  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:32.604825  148123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:32.739827  148123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:32.863648  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:32.879435  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:32.899582  148123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:23:32.899659  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.915010  148123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:32.915081  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.931181  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.944038  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.955182  148123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:32.972220  148123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:32.982681  148123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:32.982739  148123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:32.997012  148123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:33.009468  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:33.180403  148123 ssh_runner.go:195] Run: sudo systemctl restart crio
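
The sed runs above rewrite CRI-O's drop-in config to pin the pause image and switch the cgroup manager before crio is restarted. The same two edits done in Go, as a sketch (the conmon_cgroup tweak and error handling are kept minimal):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Pin the pause image used for pod sandboxes.
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
        // Use cgroupfs instead of systemd as the cgroup manager.
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }
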
	I1010 19:23:33.284934  148123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:33.285008  148123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:33.290063  148123 start.go:563] Will wait 60s for crictl version
	I1010 19:23:33.290124  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:33.294752  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:33.342710  148123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:33.342792  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.372821  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.409395  148123 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:23:33.411012  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:33.414374  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.414789  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:33.414836  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.415084  148123 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:33.420164  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:33.433442  148123 kubeadm.go:883] updating cluster {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1010 19:23:33.433631  148123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:23:33.433717  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:33.480013  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:33.480078  148123 ssh_runner.go:195] Run: which lz4
	I1010 19:23:33.485443  148123 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:33.490986  148123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:33.491032  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:23:31.921836  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Start
	I1010 19:23:31.922036  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring networks are active...
	I1010 19:23:31.922890  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network default is active
	I1010 19:23:31.923271  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network mk-default-k8s-diff-port-361847 is active
	I1010 19:23:31.923685  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Getting domain xml...
	I1010 19:23:31.924449  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Creating domain...
	I1010 19:23:33.241164  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting to get IP...
	I1010 19:23:33.242273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242713  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242814  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.242702  149213 retry.go:31] will retry after 195.013046ms: waiting for machine to come up
	I1010 19:23:33.438965  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439452  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.439379  149213 retry.go:31] will retry after 344.223823ms: waiting for machine to come up
	I1010 19:23:33.785167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785833  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785864  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.785780  149213 retry.go:31] will retry after 342.787658ms: waiting for machine to come up
	I1010 19:23:33.435066  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:34.936768  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:34.936800  147758 pod_ready.go:82] duration metric: took 10.009235225s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:34.936814  147758 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944395  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.944430  147758 pod_ready.go:82] duration metric: took 1.007599746s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944445  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953224  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.953255  147758 pod_ready.go:82] duration metric: took 8.801702ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953266  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.227279  148123 crio.go:462] duration metric: took 1.74186828s to copy over tarball
	I1010 19:23:35.227383  148123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:38.216499  148123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.989082447s)
	I1010 19:23:38.216533  148123 crio.go:469] duration metric: took 2.989218587s to extract the tarball
	I1010 19:23:38.216541  148123 ssh_runner.go:146] rm: /preloaded.tar.lz4
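
The preload path copies a pre-built image tarball into the guest and unpacks it under /var so CRI-O starts with the Kubernetes images already on disk. A sketch of the extraction step, mirroring the tar invocation in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Unpack the lz4-compressed preload tarball under /var, preserving the
        // security.capability xattrs that the container images rely on.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s\n", err, out)
            return
        }
        fmt.Println("preloaded images extracted under /var")
    }
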
	I1010 19:23:38.259699  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:38.308284  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:38.308313  148123 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:23:38.308422  148123 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:23:38.308477  148123 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.308482  148123 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.308593  148123 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.308617  148123 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.308405  148123 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.308446  148123 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.308530  148123 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310558  148123 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.310612  148123 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.310642  148123 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.310652  148123 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310645  148123 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.310560  148123 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.310733  148123 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.311068  148123 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:23:38.469174  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.469563  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.488463  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.490815  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.491841  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.496079  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.501292  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:23:34.130443  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130998  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.130915  149213 retry.go:31] will retry after 393.100812ms: waiting for machine to come up
	I1010 19:23:34.525570  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526032  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526060  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.525980  149213 retry.go:31] will retry after 465.468437ms: waiting for machine to come up
	I1010 19:23:34.992775  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993348  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993386  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.993287  149213 retry.go:31] will retry after 907.884473ms: waiting for machine to come up
	I1010 19:23:35.902481  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902942  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:35.902878  149213 retry.go:31] will retry after 1.157806188s: waiting for machine to come up
	I1010 19:23:37.062068  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062777  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:37.062706  149213 retry.go:31] will retry after 1.432559208s: waiting for machine to come up
	I1010 19:23:38.496653  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497153  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:38.497066  149213 retry.go:31] will retry after 1.559787003s: waiting for machine to come up
	I1010 19:23:37.961068  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.065559  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.528757  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.528786  147758 pod_ready.go:82] duration metric: took 4.575513259s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.528802  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538002  147758 pod_ready.go:93] pod "kube-proxy-f5l6x" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.538034  147758 pod_ready.go:82] duration metric: took 9.22357ms for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538049  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543594  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.543615  147758 pod_ready.go:82] duration metric: took 5.558665ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543626  147758 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
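The pod_ready waits above poll each control-plane pod until its Ready condition reports True, giving up after the stated timeout. A minimal client-go sketch of an equivalent readiness poll (an illustration only, not minikube's actual pod_ready.go; the kubeconfig path, namespace and pod name are assumptions taken from the log):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitPodReady polls until the named pod reports the Ready condition,
	// mirroring the "waiting up to 4m0s for pod ... to be Ready" entries above.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient errors while the apiserver restarts
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-embed-certs-541370", 4*time.Minute); err != nil {
			fmt.Println("pod never became Ready:", err)
		}
	}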
	I1010 19:23:38.581315  148123 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:23:38.581361  148123 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:23:38.581385  148123 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.581407  148123 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.581442  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.581457  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.642668  148123 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:23:38.642721  148123 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.642777  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658235  148123 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:23:38.658291  148123 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.658288  148123 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:23:38.658328  148123 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.658346  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658373  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.678038  148123 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:23:38.678097  148123 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.678155  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682719  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.682773  148123 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:23:38.682721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.682791  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.682812  148123 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:23:38.682845  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.682872  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.691181  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842626  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:38.842714  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.842754  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.842797  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.842721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.842851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842789  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.989994  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.005815  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:39.005923  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:39.010029  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:39.010105  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:39.010150  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:39.010230  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:39.134646  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:39.141431  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.176750  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:23:39.176945  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:23:39.176989  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:23:39.177038  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:23:39.177073  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:23:39.177104  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:23:39.335056  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:23:39.335113  148123 cache_images.go:92] duration metric: took 1.026784312s to LoadCachedImages
	W1010 19:23:39.335180  148123 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1010 19:23:39.335192  148123 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.20.0 crio true true} ...
	I1010 19:23:39.335305  148123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-947203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:39.335380  148123 ssh_runner.go:195] Run: crio config
	I1010 19:23:39.394307  148123 cni.go:84] Creating CNI manager for ""
	I1010 19:23:39.394338  148123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:39.394350  148123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:39.394378  148123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947203 NodeName:old-k8s-version-947203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:23:39.394572  148123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-947203"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:39.394662  148123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:23:39.405510  148123 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:39.405600  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:39.415968  148123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1010 19:23:39.443234  148123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:39.462448  148123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:23:39.481959  148123 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:39.486188  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:39.501922  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:39.642769  148123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:39.661335  148123 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203 for IP: 192.168.61.112
	I1010 19:23:39.661363  148123 certs.go:194] generating shared ca certs ...
	I1010 19:23:39.661384  148123 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:39.661595  148123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:39.661658  148123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:39.661671  148123 certs.go:256] generating profile certs ...
	I1010 19:23:39.661779  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key
	I1010 19:23:39.661867  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52
	I1010 19:23:39.661922  148123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key
	I1010 19:23:39.662312  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:39.662395  148123 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:39.662410  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:39.662447  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:39.662501  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:39.662531  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:39.662622  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:39.663372  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:39.710366  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:39.752724  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:39.801408  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:39.843557  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 19:23:39.892522  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:39.938519  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:39.966140  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:39.992974  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:40.019381  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:40.045769  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:40.080683  148123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:40.099747  148123 ssh_runner.go:195] Run: openssl version
	I1010 19:23:40.107865  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:40.123158  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128594  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128660  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.135319  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:40.147514  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:40.162463  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168387  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168512  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.176304  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:40.191306  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:40.204196  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211044  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211122  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.219574  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:40.232634  148123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:40.237639  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:40.244141  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:40.251188  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:40.258538  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:40.265029  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:40.271754  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
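The openssl x509 -checkend 86400 runs above ask whether each certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. A minimal Go sketch of the same check using crypto/x509 (illustrative only; the certificate path is taken from the log):

	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresSoon reports whether the PEM certificate at path expires within
	// the given window, like `openssl x509 -checkend 86400` for a 24h window.
	func expiresSoon(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}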
	I1010 19:23:40.278352  148123 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:40.278453  148123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:40.278518  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.317771  148123 cri.go:89] found id: ""
	I1010 19:23:40.317838  148123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:40.330494  148123 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:40.330520  148123 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:40.330576  148123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:40.341984  148123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:40.343386  148123 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:23:40.344334  148123 kubeconfig.go:62] /home/jenkins/minikube-integration/19787-81676/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-947203" cluster setting kubeconfig missing "old-k8s-version-947203" context setting]
	I1010 19:23:40.345360  148123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:40.347128  148123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:40.357902  148123 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I1010 19:23:40.357944  148123 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:40.357961  148123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:40.358031  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.399970  148123 cri.go:89] found id: ""
	I1010 19:23:40.400057  148123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:40.418527  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:40.429182  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:40.429217  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:40.429262  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:40.439287  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:40.439363  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:40.449479  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:40.459288  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:40.459359  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:40.472733  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.484890  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:40.484958  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.495301  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:40.504750  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:40.504820  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:40.515034  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:40.526168  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:40.675022  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.566371  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.815296  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.930082  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:42.027597  148123 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:42.027713  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:42.528219  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.527735  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:40.058247  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058783  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058835  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:40.058696  149213 retry.go:31] will retry after 2.214094081s: waiting for machine to come up
	I1010 19:23:42.274629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275194  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:42.275106  149213 retry.go:31] will retry after 2.126528577s: waiting for machine to come up
	I1010 19:23:42.550865  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:45.051043  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:44.028463  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.527916  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.027911  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.528797  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.028772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.527982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.027799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.527894  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.028352  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.527935  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.403101  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403575  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403616  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:44.403534  149213 retry.go:31] will retry after 3.603964622s: waiting for machine to come up
	I1010 19:23:48.008726  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009142  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009191  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:48.009100  149213 retry.go:31] will retry after 3.639744981s: waiting for machine to come up
	I1010 19:23:47.551003  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:49.661572  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:52.858209  147213 start.go:364] duration metric: took 56.558774237s to acquireMachinesLock for "no-preload-320324"
	I1010 19:23:52.858274  147213 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:52.858283  147213 fix.go:54] fixHost starting: 
	I1010 19:23:52.858705  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:52.858742  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:52.878428  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I1010 19:23:52.878955  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:52.879563  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:23:52.879599  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:52.879945  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:52.880144  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:23:52.880282  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:23:52.881626  147213 fix.go:112] recreateIfNeeded on no-preload-320324: state=Stopped err=<nil>
	I1010 19:23:52.881650  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	W1010 19:23:52.881799  147213 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:52.883912  147213 out.go:177] * Restarting existing kvm2 VM for "no-preload-320324" ...
	I1010 19:23:49.028421  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:49.528271  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.028462  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.527867  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.028782  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.528581  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.028732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.027852  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.528647  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.885239  147213 main.go:141] libmachine: (no-preload-320324) Calling .Start
	I1010 19:23:52.885429  147213 main.go:141] libmachine: (no-preload-320324) Ensuring networks are active...
	I1010 19:23:52.886211  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network default is active
	I1010 19:23:52.886749  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network mk-no-preload-320324 is active
	I1010 19:23:52.887310  147213 main.go:141] libmachine: (no-preload-320324) Getting domain xml...
	I1010 19:23:52.888034  147213 main.go:141] libmachine: (no-preload-320324) Creating domain...
	I1010 19:23:51.652975  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653464  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Found IP for machine: 192.168.50.32
	I1010 19:23:51.653487  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserving static IP address...
	I1010 19:23:51.653509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has current primary IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653910  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.653956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | skip adding static IP to network mk-default-k8s-diff-port-361847 - found existing host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"}
	I1010 19:23:51.653974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserved static IP address: 192.168.50.32
	I1010 19:23:51.653993  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for SSH to be available...
	I1010 19:23:51.654006  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Getting to WaitForSSH function...
	I1010 19:23:51.655927  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656210  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.656240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656334  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH client type: external
	I1010 19:23:51.656372  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa (-rw-------)
	I1010 19:23:51.656409  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:51.656425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | About to run SSH command:
	I1010 19:23:51.656436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | exit 0
	I1010 19:23:51.780839  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:51.781206  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetConfigRaw
	I1010 19:23:51.781939  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:51.784347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784663  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.784696  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784918  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:23:51.785134  148525 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:51.785158  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:51.785403  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.787817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788306  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.788347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788547  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.788807  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789038  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789274  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.789515  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.789802  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.789825  148525 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:51.893367  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:51.893399  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893652  148525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-361847"
	I1010 19:23:51.893699  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.896986  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897377  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.897422  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897662  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.897815  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.897949  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.898064  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.898302  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.898489  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.898502  148525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-361847 && echo "default-k8s-diff-port-361847" | sudo tee /etc/hostname
	I1010 19:23:52.015158  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361847
	
	I1010 19:23:52.015199  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.018094  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018468  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.018497  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018683  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.018901  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019039  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.019474  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.019690  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.019708  148525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-361847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-361847/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-361847' | sudo tee -a /etc/hosts; 
				fi
			fi
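The two SSH commands above set the transient hostname plus /etc/hostname, then make /etc/hosts resolve that name via a 127.0.1.1 entry. Below is a minimal Go sketch of how those command strings could be assembled before being sent over SSH; the hostname is the one from this log, but the helper itself is illustrative and not minikube source.

package main

import (
	"fmt"
	"strings"
)

// buildHostnameCmds assembles the two shell commands from the log: one sets
// the transient hostname and /etc/hostname, the other makes sure /etc/hosts
// resolves the name via a 127.0.1.1 entry.
func buildHostnameCmds(hostname string) []string {
	set := fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, hostname, hostname)
	hosts := strings.TrimSpace(fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname))
	return []string{set, hosts}
}

func main() {
	for _, cmd := range buildHostnameCmds("default-k8s-diff-port-361847") {
		fmt.Println(cmd)
	}
}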
	I1010 19:23:52.133923  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:52.133960  148525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:52.134007  148525 buildroot.go:174] setting up certificates
	I1010 19:23:52.134023  148525 provision.go:84] configureAuth start
	I1010 19:23:52.134043  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:52.134351  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.137242  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137637  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.137670  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137860  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.140264  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.140672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140833  148525 provision.go:143] copyHostCerts
	I1010 19:23:52.140907  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:52.140922  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:52.140977  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:52.141088  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:52.141098  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:52.141118  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:52.141175  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:52.141182  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:52.141213  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:52.141264  148525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-361847 san=[127.0.0.1 192.168.50.32 default-k8s-diff-port-361847 localhost minikube]
	I1010 19:23:52.241146  148525 provision.go:177] copyRemoteCerts
	I1010 19:23:52.241212  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:52.241241  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.244061  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244463  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.244490  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244731  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.244929  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.245110  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.245228  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.327309  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:52.352288  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 19:23:52.376308  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:52.400807  148525 provision.go:87] duration metric: took 266.765119ms to configureAuth
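provision.go:117 above generates a server certificate signed by the local CA, with SANs [127.0.0.1 192.168.50.32 default-k8s-diff-port-361847 localhost minikube]. A self-contained sketch of that kind of CA-signed server certificate using crypto/x509 follows; key size, validity period, and the elided error handling are assumptions, not minikube's actual values.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-361847"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-361847", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.32")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}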
	I1010 19:23:52.400862  148525 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:52.401065  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:52.401171  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.403552  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.403919  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.403950  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.404173  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.404371  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404513  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.404743  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.404927  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.404949  148525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:52.622902  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:52.622930  148525 machine.go:96] duration metric: took 837.779579ms to provisionDockerMachine
	I1010 19:23:52.622942  148525 start.go:293] postStartSetup for "default-k8s-diff-port-361847" (driver="kvm2")
	I1010 19:23:52.622952  148525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:52.622968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.623331  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:52.623369  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.626106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626435  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.626479  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626721  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.626932  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.627091  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.627262  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.708050  148525 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:52.712524  148525 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:52.712550  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:52.712608  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:52.712688  148525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:52.712782  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:52.723719  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:52.747686  148525 start.go:296] duration metric: took 124.729371ms for postStartSetup
	I1010 19:23:52.747727  148525 fix.go:56] duration metric: took 20.853721623s for fixHost
	I1010 19:23:52.747749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.750316  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750645  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.750677  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.751046  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751195  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751333  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.751511  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.751733  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.751749  148525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:52.857986  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588232.831281012
	
	I1010 19:23:52.858019  148525 fix.go:216] guest clock: 1728588232.831281012
	I1010 19:23:52.858029  148525 fix.go:229] Guest: 2024-10-10 19:23:52.831281012 +0000 UTC Remote: 2024-10-10 19:23:52.747731551 +0000 UTC m=+158.845659062 (delta=83.549461ms)
	I1010 19:23:52.858075  148525 fix.go:200] guest clock delta is within tolerance: 83.549461ms
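The fix.go lines read the guest clock with `date +%s.%N` and accept the start when the delta against the host clock stays within tolerance (83.549461ms here). A small sketch of that comparison follows; the 1s tolerance constant is an assumption, and float parsing loses sub-microsecond precision.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the host clock is ahead of (positive) or behind (negative) the guest.
// float64 parsing loses sub-microsecond precision, which is fine for a
// tolerance check in the tens of milliseconds.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return host.Sub(guest), nil
}

func main() {
	// Guest and host timestamps taken from the fix.go lines above; the 1s
	// tolerance is an assumption, not minikube's configured value.
	const tolerance = time.Second
	delta, err := clockDelta("1728588232.831281012", time.Unix(0, 1728588232747731551))
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}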
	I1010 19:23:52.858088  148525 start.go:83] releasing machines lock for "default-k8s-diff-port-361847", held for 20.964121636s
	I1010 19:23:52.858120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.858491  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.861220  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.861672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861828  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862337  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862548  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862655  148525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:52.862702  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.862825  148525 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:52.862854  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.865579  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.865960  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866290  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866300  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.866319  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866423  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866496  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866648  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866671  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.866798  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866910  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.966354  148525 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:52.972526  148525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:53.119801  148525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:53.126287  148525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:53.126355  148525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:53.147301  148525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:53.147325  148525 start.go:495] detecting cgroup driver to use...
	I1010 19:23:53.147381  148525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:53.167368  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:53.183239  148525 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:53.183308  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:53.203230  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:53.217261  148525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:53.343555  148525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:53.491952  148525 docker.go:233] disabling docker service ...
	I1010 19:23:53.492054  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:53.508136  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:53.521662  148525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:53.651858  148525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:53.781954  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:53.803934  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:53.826070  148525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:53.826146  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.837506  148525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:53.837587  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.848653  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.860511  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.873254  148525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:53.887862  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.899507  148525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.923325  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.934999  148525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:53.946869  148525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:53.946945  148525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:53.968116  148525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
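The runner first probes `sysctl net.bridge.bridge-nf-call-iptables`; because /proc/sys/net/bridge does not exist yet, it treats the failure as non-fatal, loads br_netfilter, and enables IPv4 forwarding. A minimal exec-based sketch of that fallback, using the same commands shown in the log (root required); it is a stand-in for the ssh_runner calls, not minikube code.

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: probe the sysctl,
// and if the bridge module is not loaded yet, modprobe it, then enable
// IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Matches the "couldn't verify netfilter ... might be okay" path above.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}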
	I1010 19:23:53.980109  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:54.106345  148525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:54.210345  148525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:54.210417  148525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:54.215968  148525 start.go:563] Will wait 60s for crictl version
	I1010 19:23:54.216037  148525 ssh_runner.go:195] Run: which crictl
	I1010 19:23:54.219885  148525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:54.260286  148525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:54.260375  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.289908  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.320940  148525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:52.050137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.060194  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:56.551981  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.027982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.528065  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.027890  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.527753  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.027877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.028263  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.028743  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.528039  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.234149  147213 main.go:141] libmachine: (no-preload-320324) Waiting to get IP...
	I1010 19:23:54.235147  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.235598  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.235657  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.235580  149378 retry.go:31] will retry after 308.921504ms: waiting for machine to come up
	I1010 19:23:54.546327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.547002  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.547029  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.546956  149378 retry.go:31] will retry after 288.92327ms: waiting for machine to come up
	I1010 19:23:54.837625  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.838136  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.838164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.838054  149378 retry.go:31] will retry after 321.948113ms: waiting for machine to come up
	I1010 19:23:55.161940  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.162494  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.162526  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.162441  149378 retry.go:31] will retry after 573.848095ms: waiting for machine to come up
	I1010 19:23:55.739080  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.739592  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.739620  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.739494  149378 retry.go:31] will retry after 529.087622ms: waiting for machine to come up
	I1010 19:23:56.270324  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.270899  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.270929  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.270850  149378 retry.go:31] will retry after 629.204989ms: waiting for machine to come up
	I1010 19:23:56.901836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.902283  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.902325  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.902222  149378 retry.go:31] will retry after 804.309499ms: waiting for machine to come up
	I1010 19:23:57.708806  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:57.709175  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:57.709208  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:57.709151  149378 retry.go:31] will retry after 1.204078295s: waiting for machine to come up
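The retry.go lines show the no-preload-320324 machine being polled for an IP with delays growing from roughly 300ms toward a few seconds. Below is a self-contained sketch of such a jittered, growing backoff loop; the growth factor, cap, and the placeholder IP are assumptions rather than minikube's actual retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a little
// longer each round with some jitter, roughly like the retry.go lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 5*time.Second { // cap is an assumption
			delay = delay * 3 / 2
		}
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.0.2.10", nil // placeholder address, not from the log
	}, 10)
	fmt.Println(ip, err)
}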
	I1010 19:23:54.322534  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:54.325744  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326217  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:54.326257  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326533  148525 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:54.331527  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:54.343881  148525 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:54.344033  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:54.344084  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:54.389066  148525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:54.389149  148525 ssh_runner.go:195] Run: which lz4
	I1010 19:23:54.393550  148525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:54.397787  148525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:54.397833  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:55.897111  148525 crio.go:462] duration metric: took 1.503593301s to copy over tarball
	I1010 19:23:55.897212  148525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:58.060691  148525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16343467s)
	I1010 19:23:58.060731  148525 crio.go:469] duration metric: took 2.163580526s to extract the tarball
	I1010 19:23:58.060741  148525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:58.103877  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:58.162881  148525 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:58.162907  148525 cache_images.go:84] Images are preloaded, skipping loading
	I1010 19:23:58.162915  148525 kubeadm.go:934] updating node { 192.168.50.32 8444 v1.31.1 crio true true} ...
	I1010 19:23:58.163031  148525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-361847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:58.163098  148525 ssh_runner.go:195] Run: crio config
	I1010 19:23:58.219804  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:23:58.219827  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:58.219837  148525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:58.219861  148525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-361847 NodeName:default-k8s-diff-port-361847 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:58.219982  148525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-361847"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
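The kubeadm config above is rendered from the option set logged at kubeadm.go:181 (node name, advertise address 192.168.50.32, bind port 8444, CRI socket, cgroup driver, and so on). Below is a trimmed text/template sketch that reproduces only the InitConfiguration fragment from those logged values; the template text is illustrative, minikube renders the real config from its own templates.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the InitConfiguration fragment above, covering
// only the fields visible in the log (node name, node IP, bind port).
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

type params struct {
	NodeName      string
	NodeIP        string
	APIServerPort int
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from the default-k8s-diff-port-361847 log above.
	_ = t.Execute(os.Stdout, params{
		NodeName:      "default-k8s-diff-port-361847",
		NodeIP:        "192.168.50.32",
		APIServerPort: 8444,
	})
}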
	
	I1010 19:23:58.220042  148525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:58.231444  148525 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:58.231565  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:58.241835  148525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1010 19:23:58.259408  148525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:58.276571  148525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I1010 19:23:58.294640  148525 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:58.298503  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:58.312286  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:58.449757  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:58.467342  148525 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847 for IP: 192.168.50.32
	I1010 19:23:58.467377  148525 certs.go:194] generating shared ca certs ...
	I1010 19:23:58.467398  148525 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:58.467583  148525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:58.467642  148525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:58.467655  148525 certs.go:256] generating profile certs ...
	I1010 19:23:58.467826  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/client.key
	I1010 19:23:58.467895  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key.ae5e3f04
	I1010 19:23:58.467951  148525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key
	I1010 19:23:58.468089  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:58.468136  148525 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:58.468153  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:58.468194  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:58.468226  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:58.468260  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:58.468317  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:58.468931  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:58.529632  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:58.571900  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:58.612599  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:58.645536  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 19:23:58.675961  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:23:58.700712  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:58.725355  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:58.751138  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:58.775832  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:58.800729  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:58.825558  148525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:58.843331  148525 ssh_runner.go:195] Run: openssl version
	I1010 19:23:58.849271  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:58.861031  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865721  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865797  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.871961  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:58.884520  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:58.896744  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901507  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901571  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.907366  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:58.919784  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:58.931972  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936897  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936981  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.943007  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
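Each PEM installed under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and symlinked as /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA.pem above) so the system trust lookup can find it. The sketch below shells out to openssl and creates the link with the same guarded ln -fs shown in the log; paths are the ones shown, the helper itself is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCert mirrors the two commands in the log: ask openssl for the subject
// hash of an installed PEM, then symlink /etc/ssl/certs/<hash>.0 to it.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	script := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link)
	return exec.Command("sudo", "/bin/bash", "-c", script).Run()
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}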
	I1010 19:23:59.052037  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:01.551982  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:59.028519  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:59.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.528623  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.028697  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.528030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.028062  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.527876  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.028745  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.528569  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.914409  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:58.914894  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:58.914927  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:58.914831  149378 retry.go:31] will retry after 1.631827888s: waiting for machine to come up
	I1010 19:24:00.548505  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:00.549135  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:00.549164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:00.549043  149378 retry.go:31] will retry after 2.126895157s: waiting for machine to come up
	I1010 19:24:02.678328  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:02.678907  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:02.678969  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:02.678891  149378 retry.go:31] will retry after 2.754376625s: waiting for machine to come up
	I1010 19:23:58.955104  148525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:58.959833  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:58.966528  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:58.973590  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:58.982390  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:58.990767  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:58.997162  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:23:59.003647  148525 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:59.003786  148525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:59.003865  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.048772  148525 cri.go:89] found id: ""
	I1010 19:23:59.048869  148525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:59.061267  148525 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:59.061288  148525 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:59.061338  148525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:59.072629  148525 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:59.074287  148525 kubeconfig.go:125] found "default-k8s-diff-port-361847" server: "https://192.168.50.32:8444"
	I1010 19:23:59.077880  148525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:59.090738  148525 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I1010 19:23:59.090783  148525 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:59.090799  148525 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:59.090886  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.136762  148525 cri.go:89] found id: ""
	I1010 19:23:59.136888  148525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:59.155937  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:59.166471  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:59.166493  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:59.166549  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:23:59.178247  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:59.178313  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:59.189455  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:23:59.200127  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:59.200204  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:59.210764  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.221048  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:59.221119  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.231762  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:23:59.242152  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:59.242217  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:59.252608  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:59.265219  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:59.391743  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.243288  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.453782  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.532137  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.623598  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:00.623711  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.124678  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.624626  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.667587  148525 api_server.go:72] duration metric: took 1.043987857s to wait for apiserver process to appear ...
	I1010 19:24:01.667621  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:01.667649  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:01.668298  148525 api_server.go:269] stopped: https://192.168.50.32:8444/healthz: Get "https://192.168.50.32:8444/healthz": dial tcp 192.168.50.32:8444: connect: connection refused
	I1010 19:24:02.168273  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.275654  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.275695  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.275713  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.309713  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.309770  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.668325  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.684992  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:05.685031  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.168198  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.176584  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:06.176627  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.668130  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.682049  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:24:06.692780  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:06.692811  148525 api_server.go:131] duration metric: took 5.025182717s to wait for apiserver health ...
	I1010 19:24:06.692820  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:24:06.692831  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:06.694447  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
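	(Editor's note: the 403 -> 500 -> 200 progression above is the normal sequence while a restarted apiserver finishes its post-start hooks: anonymous requests are typically Forbidden until the rbac/bootstrap-roles hook has created the bindings that let unauthenticated callers read /healthz, and /healthz then reports 500 until every hook passes. A rough Go sketch of such a polling loop follows; the address is the one from the log, TLS verification is skipped purely for illustration, and this is not minikube's code.)

// healthzpoll.go - sketch: poll an apiserver /healthz endpoint until it
// returns 200, in the spirit of the half-second retry loop logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: a real client would trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.50.32:8444/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the retries in the log
	}
	fmt.Println("gave up waiting for healthz")
}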
	I1010 19:24:03.558797  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:06.054012  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:04.028711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:04.528770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.028716  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.528083  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.028204  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.528430  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.027972  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.027829  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.528676  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.435450  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:05.435940  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:05.435970  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:05.435888  149378 retry.go:31] will retry after 2.981990051s: waiting for machine to come up
	I1010 19:24:08.419385  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:08.419982  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:08.420006  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:08.419905  149378 retry.go:31] will retry after 3.976204267s: waiting for machine to come up
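	(Editor's note: the `retry.go:31` lines above come from a wait loop that sleeps for a growing, jittered interval between attempts while the VM acquires its DHCP lease. A generic Go sketch of that pattern follows; the helper name and tuning values are illustrative assumptions, not minikube's retry package.)

// retrywait.go - sketch: retry an operation with a growing, jittered delay,
// the pattern behind the "will retry after ..." lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxWait elapses.
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	start := time.Now()
	delay := time.Second
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxWait {
			return fmt.Errorf("gave up after %s: %w", time.Since(start).Round(time.Second), err)
		}
		// Add up to 50% jitter so concurrent waiters do not retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay between attempts
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, time.Minute)
	fmt.Println("result:", err)
}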
	I1010 19:24:06.695841  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:06.711212  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:06.747753  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:06.768344  148525 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:06.768429  148525 system_pods.go:61] "coredns-7c65d6cfc9-rv8vq" [93b209ea-bb5f-40c5-aea8-8771b785f021] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:06.768446  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [65129999-984d-497c-a6e1-9c53a5374991] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:06.768452  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [5f18ba24-29cf-433e-a70d-23757278c04f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:06.768460  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [c189c785-8ac5-4003-802d-9e7c089d450e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:06.768467  148525 system_pods.go:61] "kube-proxy-v5lm8" [e78eabf9-5c65-4cba-83fd-0837cef05126] Running
	I1010 19:24:06.768476  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [4f84f0f5-e255-4534-9db3-e5cfee0b2447] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:06.768485  148525 system_pods.go:61] "metrics-server-6867b74b74-h5kjm" [a3979b79-bd21-490b-97ac-0a78efd43a99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:06.768493  148525 system_pods.go:61] "storage-provisioner" [ca8606d3-9adb-46da-886a-3081b11b52a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:24:06.768499  148525 system_pods.go:74] duration metric: took 20.716461ms to wait for pod list to return data ...
	I1010 19:24:06.768509  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:06.777935  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:06.777973  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:06.777988  148525 node_conditions.go:105] duration metric: took 9.473726ms to run NodePressure ...
	I1010 19:24:06.778019  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:07.053296  148525 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057585  148525 kubeadm.go:739] kubelet initialised
	I1010 19:24:07.057608  148525 kubeadm.go:740] duration metric: took 4.283027ms waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057618  148525 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:07.064157  148525 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.069962  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.069989  148525 pod_ready.go:82] duration metric: took 5.791958ms for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.069999  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.070022  148525 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.075615  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075644  148525 pod_ready.go:82] duration metric: took 5.608749ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.075654  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075661  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.081717  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081743  148525 pod_ready.go:82] duration metric: took 6.074977ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.081754  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081761  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.152204  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152244  148525 pod_ready.go:82] duration metric: took 70.475599ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.152258  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152266  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551283  148525 pod_ready.go:93] pod "kube-proxy-v5lm8" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:07.551311  148525 pod_ready.go:82] duration metric: took 399.036581ms for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551324  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
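	(Editor's note: the pod_ready.go entries above poll each system-critical pod until its Ready condition is True, or bail out early when the hosting node itself is not Ready, as in the skipped entries. A simplified client-go sketch of checking that condition once follows; the kubeconfig path is a placeholder and this is not minikube's pod_ready implementation.)

// podready.go - sketch: read a pod's Ready condition with client-go, the
// kind of check behind the pod_ready.go lines above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-v5lm8", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
}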
	I1010 19:24:08.550896  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:10.551437  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:09.028538  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:09.527883  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.028693  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.528439  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.028528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.528679  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.028002  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.527904  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.028685  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.527833  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.401115  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401808  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has current primary IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401841  147213 main.go:141] libmachine: (no-preload-320324) Found IP for machine: 192.168.72.11
	I1010 19:24:12.401856  147213 main.go:141] libmachine: (no-preload-320324) Reserving static IP address...
	I1010 19:24:12.402368  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.402407  147213 main.go:141] libmachine: (no-preload-320324) DBG | skip adding static IP to network mk-no-preload-320324 - found existing host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"}
	I1010 19:24:12.402426  147213 main.go:141] libmachine: (no-preload-320324) Reserved static IP address: 192.168.72.11
	I1010 19:24:12.402443  147213 main.go:141] libmachine: (no-preload-320324) Waiting for SSH to be available...
	I1010 19:24:12.402458  147213 main.go:141] libmachine: (no-preload-320324) DBG | Getting to WaitForSSH function...
	I1010 19:24:12.404803  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405200  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.405226  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405461  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH client type: external
	I1010 19:24:12.405494  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa (-rw-------)
	I1010 19:24:12.405527  147213 main.go:141] libmachine: (no-preload-320324) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:24:12.405541  147213 main.go:141] libmachine: (no-preload-320324) DBG | About to run SSH command:
	I1010 19:24:12.405554  147213 main.go:141] libmachine: (no-preload-320324) DBG | exit 0
	I1010 19:24:12.529010  147213 main.go:141] libmachine: (no-preload-320324) DBG | SSH cmd err, output: <nil>: 
	I1010 19:24:12.529401  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetConfigRaw
	I1010 19:24:12.530257  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.533285  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533692  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.533727  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533963  147213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/config.json ...
	I1010 19:24:12.534205  147213 machine.go:93] provisionDockerMachine start ...
	I1010 19:24:12.534230  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:12.534450  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.536585  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.536976  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.537003  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.537133  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.537323  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537512  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537689  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.537925  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.538138  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.538151  147213 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:24:12.641679  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:24:12.641706  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.641964  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:24:12.642002  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.642235  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.645149  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645488  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.645521  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.645836  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646001  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646155  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.646352  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.646533  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.646545  147213 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320324 && echo "no-preload-320324" | sudo tee /etc/hostname
	I1010 19:24:12.766449  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320324
	
	I1010 19:24:12.766480  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.769836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770331  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.770356  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770584  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.770810  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.770962  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.771119  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.771252  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.771448  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.771470  147213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320324/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:24:12.882458  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:24:12.882495  147213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:24:12.882537  147213 buildroot.go:174] setting up certificates
	I1010 19:24:12.882547  147213 provision.go:84] configureAuth start
	I1010 19:24:12.882562  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.882865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.885854  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886139  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.886173  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886308  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.888479  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.888819  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888976  147213 provision.go:143] copyHostCerts
	I1010 19:24:12.889037  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:24:12.889049  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:24:12.889102  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:24:12.889235  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:24:12.889246  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:24:12.889278  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:24:12.889370  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:24:12.889381  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:24:12.889406  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:24:12.889493  147213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.no-preload-320324 san=[127.0.0.1 192.168.72.11 localhost minikube no-preload-320324]
	I1010 19:24:12.978176  147213 provision.go:177] copyRemoteCerts
	I1010 19:24:12.978235  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:24:12.978261  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.981662  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982182  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.982218  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.982647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.982829  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.983005  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.067269  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:24:13.092777  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 19:24:13.118530  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:24:13.143401  147213 provision.go:87] duration metric: took 260.833877ms to configureAuth
	I1010 19:24:13.143436  147213 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:24:13.143678  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:13.143776  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.147086  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147507  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.147531  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147787  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.148032  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148222  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.148660  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.149013  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.149041  147213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:24:13.375683  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:24:13.375714  147213 machine.go:96] duration metric: took 841.493636ms to provisionDockerMachine
	I1010 19:24:13.375736  147213 start.go:293] postStartSetup for "no-preload-320324" (driver="kvm2")
	I1010 19:24:13.375754  147213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:24:13.375775  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.376085  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:24:13.376116  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.378855  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379179  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.379224  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379408  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.379608  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.379769  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.379910  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.459580  147213 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:24:13.463644  147213 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:24:13.463674  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:24:13.463751  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:24:13.463845  147213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:24:13.463963  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:24:13.473483  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:13.498773  147213 start.go:296] duration metric: took 123.021762ms for postStartSetup
	I1010 19:24:13.498814  147213 fix.go:56] duration metric: took 20.640532088s for fixHost
	I1010 19:24:13.498834  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.501681  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502243  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.502281  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502476  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.502679  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502835  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502993  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.503177  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.503383  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.503396  147213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:24:13.613929  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588253.586950075
	
	I1010 19:24:13.613954  147213 fix.go:216] guest clock: 1728588253.586950075
	I1010 19:24:13.613963  147213 fix.go:229] Guest: 2024-10-10 19:24:13.586950075 +0000 UTC Remote: 2024-10-10 19:24:13.498818059 +0000 UTC m=+359.788559229 (delta=88.132016ms)
	I1010 19:24:13.613988  147213 fix.go:200] guest clock delta is within tolerance: 88.132016ms
	I1010 19:24:13.614020  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 20.755775587s
	I1010 19:24:13.614063  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.614473  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:13.617327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.617694  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.617721  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.618016  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618670  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618884  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618989  147213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:24:13.619047  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.619142  147213 ssh_runner.go:195] Run: cat /version.json
	I1010 19:24:13.619185  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.621972  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622229  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622322  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622348  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622533  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622666  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622697  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622736  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.622865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622930  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623059  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.623073  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.623225  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623349  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.720999  147213 ssh_runner.go:195] Run: systemctl --version
	I1010 19:24:13.727679  147213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:24:09.562834  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:12.058686  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:13.870558  147213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:24:13.877853  147213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:24:13.877923  147213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:24:13.896295  147213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:24:13.896325  147213 start.go:495] detecting cgroup driver to use...
	I1010 19:24:13.896400  147213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:24:13.913122  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:24:13.929359  147213 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:24:13.929437  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:24:13.944840  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:24:13.960062  147213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:24:14.090774  147213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:24:14.246094  147213 docker.go:233] disabling docker service ...
	I1010 19:24:14.246161  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:24:14.264682  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:24:14.280264  147213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:24:14.437156  147213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:24:14.569220  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:24:14.585723  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:24:14.607349  147213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:24:14.607429  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.619113  147213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:24:14.619198  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.631818  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.643977  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.655753  147213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:24:14.667235  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.679225  147213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.698760  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.710440  147213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:24:14.722565  147213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:24:14.722625  147213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:24:14.740587  147213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:24:14.752630  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:14.887728  147213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:24:14.989026  147213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:24:14.989109  147213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:24:14.995309  147213 start.go:563] Will wait 60s for crictl version
	I1010 19:24:14.995366  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.999840  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:24:15.043758  147213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:24:15.043856  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.079274  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.116630  147213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:24:13.050633  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:15.552413  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:14.028090  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:14.528072  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.028648  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.528388  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.028060  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.528472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.028098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.528125  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.028563  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.528613  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.118343  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:15.121596  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122101  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:15.122133  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122396  147213 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1010 19:24:15.127140  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:15.141249  147213 kubeadm.go:883] updating cluster {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:24:15.141375  147213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:24:15.141417  147213 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:24:15.183271  147213 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:24:15.183303  147213 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:24:15.183412  147213 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.183444  147213 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.183452  147213 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.183459  147213 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1010 19:24:15.183422  147213 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.183493  147213 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.183512  147213 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.183507  147213 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.185099  147213 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.185098  147213 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.185103  147213 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.185106  147213 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.328484  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.333573  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.340047  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.358922  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1010 19:24:15.359800  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.366668  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.409942  147213 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1010 19:24:15.409995  147213 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.410050  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.416186  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.452343  147213 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1010 19:24:15.452385  147213 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.452426  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.533567  147213 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1010 19:24:15.533620  147213 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.533671  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585611  147213 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1010 19:24:15.585659  147213 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.585685  147213 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1010 19:24:15.585712  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585724  147213 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.585765  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585769  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.585805  147213 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1010 19:24:15.585832  147213 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.585856  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.585872  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585943  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.603131  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.661918  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.683739  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.683760  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.683833  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.683880  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.685385  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.792253  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.818116  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.818183  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.818289  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.818321  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.818402  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.878069  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1010 19:24:15.878202  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.940520  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.953841  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1010 19:24:15.953955  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:15.953990  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.954047  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1010 19:24:15.954115  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1010 19:24:15.954120  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1010 19:24:15.954130  147213 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954144  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:15.954157  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954205  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:16.005975  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1010 19:24:16.006028  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1010 19:24:16.006090  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:16.023905  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1010 19:24:16.023990  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1010 19:24:16.024024  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:16.024023  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1010 19:24:16.033715  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.150881  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.144766677s)
	I1010 19:24:18.150935  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1010 19:24:18.150931  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.196753845s)
	I1010 19:24:18.150944  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.126894115s)
	I1010 19:24:18.150973  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1010 19:24:18.150953  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1010 19:24:18.150982  147213 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.117235962s)
	I1010 19:24:18.151002  147213 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151014  147213 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1010 19:24:18.151053  147213 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.151069  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151097  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.059223  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:14.059252  148525 pod_ready.go:82] duration metric: took 6.507918149s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:14.059266  148525 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:16.066908  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.082398  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.051799  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:20.552644  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:19.028306  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:19.527854  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.028765  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.528245  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.027936  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.528172  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.028527  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.528446  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.028709  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.528711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.952099  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.801005716s)
	I1010 19:24:21.952134  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1010 19:24:21.952163  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952165  147213 ssh_runner.go:235] Completed: which crictl: (3.801048272s)
	I1010 19:24:21.952212  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952225  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:21.993627  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:20.566055  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:22.567145  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:23.053514  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:25.554151  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:24.028402  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:24.528516  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.028207  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.527928  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.028032  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.527826  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.028609  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.528448  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.028018  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.528597  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.929370  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.977128659s)
	I1010 19:24:23.929418  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1010 19:24:23.929450  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929498  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.935844384s)
	I1010 19:24:23.929532  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929551  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:26.009485  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079908324s)
	I1010 19:24:26.009567  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 19:24:26.009484  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079925224s)
	I1010 19:24:26.009641  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1010 19:24:26.009671  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:26.009684  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:26.009720  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:27.968483  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.958772952s)
	I1010 19:24:27.968534  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1010 19:24:27.968559  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.958813643s)
	I1010 19:24:27.968587  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1010 19:24:27.968619  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:27.968686  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:25.069787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:27.567013  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:28.050968  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:30.551528  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:29.027960  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.528054  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.028227  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.527791  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.027790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.528761  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.028343  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.528553  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.028195  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.527895  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.315157  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.346440456s)
	I1010 19:24:29.315211  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1010 19:24:29.315244  147213 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:29.315296  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:30.173931  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 19:24:30.173977  147213 cache_images.go:123] Successfully loaded all cached images
	I1010 19:24:30.173985  147213 cache_images.go:92] duration metric: took 14.990666845s to LoadCachedImages
	I1010 19:24:30.174001  147213 kubeadm.go:934] updating node { 192.168.72.11 8443 v1.31.1 crio true true} ...
	I1010 19:24:30.174129  147213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:24:30.174221  147213 ssh_runner.go:195] Run: crio config
	I1010 19:24:30.222677  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:30.222702  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:30.222711  147213 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:24:30.222736  147213 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320324 NodeName:no-preload-320324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:24:30.222923  147213 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320324"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:24:30.222998  147213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:24:30.233755  147213 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:24:30.233818  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:24:30.243829  147213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1010 19:24:30.263056  147213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:24:30.282362  147213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1010 19:24:30.300449  147213 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I1010 19:24:30.304661  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:30.317462  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:30.445515  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:30.462816  147213 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324 for IP: 192.168.72.11
	I1010 19:24:30.462847  147213 certs.go:194] generating shared ca certs ...
	I1010 19:24:30.462871  147213 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:30.463074  147213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:24:30.463132  147213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:24:30.463145  147213 certs.go:256] generating profile certs ...
	I1010 19:24:30.463289  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/client.key
	I1010 19:24:30.463364  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key.a7785fc5
	I1010 19:24:30.463413  147213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key
	I1010 19:24:30.463565  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:24:30.463604  147213 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:24:30.463617  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:24:30.463657  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:24:30.463689  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:24:30.463721  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:24:30.463774  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:30.464502  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:24:30.525320  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:24:30.565229  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:24:30.597731  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:24:30.626174  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 19:24:30.659991  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:24:30.685662  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:24:30.710757  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:24:30.736325  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:24:30.771239  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:24:30.796467  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:24:30.821925  147213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:24:30.840743  147213 ssh_runner.go:195] Run: openssl version
	I1010 19:24:30.846898  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:24:30.858410  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863188  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863260  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.869307  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:24:30.880319  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:24:30.891307  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895771  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895828  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.901510  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:24:30.912627  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:24:30.924330  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929108  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929194  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.935266  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:24:30.946714  147213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:24:30.951692  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:24:30.957910  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:24:30.964296  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:24:30.971001  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:24:30.977427  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:24:30.984201  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:24:30.990532  147213 kubeadm.go:392] StartCluster: {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:24:30.990622  147213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:24:30.990727  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.033544  147213 cri.go:89] found id: ""
	I1010 19:24:31.033624  147213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:24:31.044956  147213 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:24:31.044975  147213 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:24:31.045025  147213 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:24:31.056563  147213 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:24:31.057705  147213 kubeconfig.go:125] found "no-preload-320324" server: "https://192.168.72.11:8443"
	I1010 19:24:31.059853  147213 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:24:31.071304  147213 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.11
	I1010 19:24:31.071338  147213 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:24:31.071353  147213 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:24:31.071444  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.107345  147213 cri.go:89] found id: ""
	I1010 19:24:31.107429  147213 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:24:31.125556  147213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:24:31.135390  147213 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:24:31.135428  147213 kubeadm.go:157] found existing configuration files:
	
	I1010 19:24:31.135478  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:24:31.144653  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:24:31.144715  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:24:31.154458  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:24:31.163444  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:24:31.163501  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:24:31.172633  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.181939  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:24:31.182001  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.191638  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:24:31.200846  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:24:31.200935  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:24:31.211048  147213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:24:31.221008  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:31.352733  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.270546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.474510  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.551517  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.707737  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:32.707826  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.208647  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.708539  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.728647  147213 api_server.go:72] duration metric: took 1.020907246s to wait for apiserver process to appear ...
	I1010 19:24:33.728678  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:33.728701  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:30.066635  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.066732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.552277  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:35.051399  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:34.028434  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:34.527949  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.028224  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.528470  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.028540  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.528499  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.028362  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.528483  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.028531  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.527865  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.025756  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.025787  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.025802  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.078247  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.078283  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.229601  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.237166  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.237204  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:37.728824  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.735660  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.735700  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.229746  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.234449  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:38.234491  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.729000  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.737564  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:24:38.751982  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:38.752012  147213 api_server.go:131] duration metric: took 5.023326632s to wait for apiserver health ...
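The healthz progression above (403 while anonymous access is still forbidden, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200) is the apiserver completing startup. A minimal Go sketch of the same polling loop; the URL and roughly 500ms interval come from the log, and the insecure TLS client is only to keep the sketch self-contained (a real client should trust the cluster CA, as minikube's api_server.go does).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
    // or the deadline expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // body is simply "ok"
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // the log polls roughly every 500ms
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.11:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }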
	I1010 19:24:38.752023  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:38.752030  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:38.753351  147213 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:34.067208  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:36.067413  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.566729  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.754645  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:38.772086  147213 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
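The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is minikube's bridge CNI configuration; its exact contents are not shown in the log. The JSON below is an assumed, typical bridge-plus-portmap conflist, written from Go purely for illustration, so field values may differ from what minikube actually installs.

    package main

    import (
        "fmt"
        "os"
    )

    // An assumed, typical bridge CNI configuration; the real 1-k8s.conflist written
    // by minikube may differ in name, subnet, and plugin list.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            fmt.Println("write conflist:", err)
        }
    }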
	I1010 19:24:38.792017  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:38.800547  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:38.800592  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:38.800602  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:38.800609  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:38.800617  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:38.800624  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:24:38.800629  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:38.800638  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:38.800642  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:24:38.800648  147213 system_pods.go:74] duration metric: took 8.60732ms to wait for pod list to return data ...
	I1010 19:24:38.800654  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:38.804628  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:38.804663  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:38.804680  147213 node_conditions.go:105] duration metric: took 4.021699ms to run NodePressure ...
	I1010 19:24:38.804700  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:39.078452  147213 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087090  147213 kubeadm.go:739] kubelet initialised
	I1010 19:24:39.087116  147213 kubeadm.go:740] duration metric: took 8.636436ms waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087125  147213 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:39.094468  147213 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.108724  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108756  147213 pod_ready.go:82] duration metric: took 14.254631ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.108770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108780  147213 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.119304  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119335  147213 pod_ready.go:82] duration metric: took 10.543376ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.119345  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119352  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.127243  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127268  147213 pod_ready.go:82] duration metric: took 7.907414ms for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.127278  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127285  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.195549  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195578  147213 pod_ready.go:82] duration metric: took 68.282333ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.195588  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195594  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.595842  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595871  147213 pod_ready.go:82] duration metric: took 400.267905ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.595880  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595886  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.995731  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995760  147213 pod_ready.go:82] duration metric: took 399.866947ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.995770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995777  147213 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:40.396420  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396456  147213 pod_ready.go:82] duration metric: took 400.667834ms for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:40.396470  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396482  147213 pod_ready.go:39] duration metric: took 1.309346973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
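The pod_ready.go waits above are a simple condition loop: fetch each system-critical pod, look at its Ready condition, and skip early while the hosting node still reports Ready:False. A hedged client-go sketch of the core check follows; the kubeconfig path and pod name are taken from this log, while the loop itself is illustrative rather than minikube's implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod has a Ready condition with status True,
    // which is what the pod_ready.go waits above keep checking for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-86brb", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }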
	I1010 19:24:40.396508  147213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:24:40.409956  147213 ops.go:34] apiserver oom_adj: -16
	I1010 19:24:40.409980  147213 kubeadm.go:597] duration metric: took 9.364998977s to restartPrimaryControlPlane
	I1010 19:24:40.409991  147213 kubeadm.go:394] duration metric: took 9.419470024s to StartCluster
	I1010 19:24:40.410009  147213 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.410085  147213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:24:40.413037  147213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.413448  147213 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:24:40.413783  147213 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:24:40.413979  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:40.413996  147213 addons.go:69] Setting default-storageclass=true in profile "no-preload-320324"
	I1010 19:24:40.414020  147213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320324"
	I1010 19:24:40.413983  147213 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320324"
	I1010 19:24:40.414048  147213 addons.go:234] Setting addon storage-provisioner=true in "no-preload-320324"
	W1010 19:24:40.414057  147213 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:24:40.414091  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414170  147213 addons.go:69] Setting metrics-server=true in profile "no-preload-320324"
	I1010 19:24:40.414230  147213 addons.go:234] Setting addon metrics-server=true in "no-preload-320324"
	W1010 19:24:40.414252  147213 addons.go:243] addon metrics-server should already be in state true
	I1010 19:24:40.414292  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414612  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414640  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414678  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.414712  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.415409  147213 out.go:177] * Verifying Kubernetes components...
	I1010 19:24:40.415412  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.415553  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.416812  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:40.431363  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1010 19:24:40.431474  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I1010 19:24:40.431659  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I1010 19:24:40.431983  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432136  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432156  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432567  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432587  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432710  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432732  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432740  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432749  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.433000  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433079  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433103  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433468  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.433498  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.436984  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.453362  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.453426  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.454884  147213 addons.go:234] Setting addon default-storageclass=true in "no-preload-320324"
	W1010 19:24:40.454913  147213 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:24:40.454947  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.455335  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.455394  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.470642  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1010 19:24:40.471118  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.471701  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.471730  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.472241  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.472523  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.473953  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I1010 19:24:40.474196  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I1010 19:24:40.474332  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474672  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474814  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.474827  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475181  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.475210  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475310  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475702  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475785  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.475825  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.475922  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.476046  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.478147  147213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:40.478395  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.479869  147213 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.479896  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:24:40.479922  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.480549  147213 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:24:37.051611  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:39.551952  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:41.553895  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:40.482101  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:24:40.482119  147213 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:24:40.482144  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.484066  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484560  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.484588  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484833  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.485065  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.485241  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.485272  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485443  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.485788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.485807  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485842  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.486017  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.486202  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.486454  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.492533  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1010 19:24:40.493012  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.493566  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.493595  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.494056  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.494325  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.496053  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.496301  147213 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.496321  147213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:24:40.496344  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.499125  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499667  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.499690  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499843  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.500022  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.500194  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.500357  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.651454  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:40.667056  147213 node_ready.go:35] waiting up to 6m0s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:40.782217  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.803094  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:24:40.803122  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:24:40.812288  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.837679  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:24:40.837723  147213 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:24:40.882090  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:40.882119  147213 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:24:40.940115  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:41.949181  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.136852217s)
	I1010 19:24:41.949258  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949275  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949286  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.167030419s)
	I1010 19:24:41.949327  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949345  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949625  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949652  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949660  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949661  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949668  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949679  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949761  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949804  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949819  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949826  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.950811  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950824  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.950827  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950822  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950845  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950811  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.957797  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.957814  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.958071  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.958077  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.958099  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005530  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065377363s)
	I1010 19:24:42.005590  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.005602  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.005914  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.005937  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005935  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.005972  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.006003  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.006280  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.006313  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.006335  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.006354  147213 addons.go:475] Verifying addon metrics-server=true in "no-preload-320324"
	I1010 19:24:42.008523  147213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
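The addon step above only applies the metrics-server manifests and records metrics-server=true; it does not confirm the Metrics API is actually serving (the surrounding metrics-server pod "Ready":"False" lines show it usually is not yet). A hedged Go sketch that checks the registered APIService instead, reusing the kubeconfig and kubectl paths from the logged commands; the APIService name v1beta1.metrics.k8s.io is the standard one installed by metrics-server, and the check itself is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Query whether the metrics.k8s.io APIService reports Available=True.
        // Paths mirror the logged kubectl invocations; no shell is involved,
        // so the jsonpath expression is passed as a single argument.
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.1/kubectl",
            "get", "apiservice", "v1beta1.metrics.k8s.io",
            "-o", "jsonpath={.status.conditions[?(@.type==\"Available\")].status}")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("apiservice check failed: %v\n%s\n", err, out)
            return
        }
        fmt.Printf("v1beta1.metrics.k8s.io Available=%s\n", out)
    }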
	I1010 19:24:39.028363  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:39.528571  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.027992  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.528552  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.028220  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.527889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:42.028462  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:42.028549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:42.070669  148123 cri.go:89] found id: ""
	I1010 19:24:42.070701  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.070710  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:42.070716  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:42.070775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:42.110693  148123 cri.go:89] found id: ""
	I1010 19:24:42.110731  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.110748  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:42.110756  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:42.110816  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:42.148471  148123 cri.go:89] found id: ""
	I1010 19:24:42.148511  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.148525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:42.148535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:42.148603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:42.184637  148123 cri.go:89] found id: ""
	I1010 19:24:42.184670  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.184683  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:42.184691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:42.184759  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:42.226794  148123 cri.go:89] found id: ""
	I1010 19:24:42.226834  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.226848  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:42.226857  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:42.226942  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:42.262969  148123 cri.go:89] found id: ""
	I1010 19:24:42.263004  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.263017  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:42.263027  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:42.263167  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:42.301063  148123 cri.go:89] found id: ""
	I1010 19:24:42.301088  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.301096  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:42.301102  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:42.301153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:42.338829  148123 cri.go:89] found id: ""
	I1010 19:24:42.338862  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.338873  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:42.338886  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:42.338901  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:42.391426  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:42.391478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:42.405520  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:42.405563  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:42.544245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:42.544275  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:42.544292  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:42.620760  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:42.620804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
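For the old-k8s-version profile (pid 148123) the apiserver never comes up, so every crictl query above returns an empty ID list and minikube falls back to gathering kubelet, dmesg, describe-nodes, and CRI-O logs. A small Go sketch of the same empty-container check, mirroring the logged crictl invocation; the fallback message is illustrative rather than minikube's wording.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // criContainerIDs mirrors the logged "crictl ps -a --quiet --name=<name>" calls:
    // empty output means no container (running or exited) matches that name.
    func criContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
            ids, err := criContainerIDs(name)
            if err != nil || len(ids) == 0 {
                // No matching container: fall back to service logs, as the test log does.
                fmt.Printf("no %q container found; check 'journalctl -u kubelet' and 'journalctl -u crio'\n", name)
                continue
            }
            fmt.Printf("%s: %v\n", name, ids)
        }
    }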
	I1010 19:24:42.009965  147213 addons.go:510] duration metric: took 1.596190602s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1010 19:24:42.672792  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:41.066744  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.066850  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.557231  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:46.051820  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:45.164389  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:45.179714  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:45.179776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:45.216271  148123 cri.go:89] found id: ""
	I1010 19:24:45.216308  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.216319  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:45.216327  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:45.216394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:45.255109  148123 cri.go:89] found id: ""
	I1010 19:24:45.255154  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.255167  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:45.255176  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:45.255248  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:45.294338  148123 cri.go:89] found id: ""
	I1010 19:24:45.294369  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.294380  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:45.294388  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:45.294457  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:45.339573  148123 cri.go:89] found id: ""
	I1010 19:24:45.339606  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.339626  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:45.339636  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:45.339703  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:45.400184  148123 cri.go:89] found id: ""
	I1010 19:24:45.400214  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.400225  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:45.400234  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:45.400301  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:45.436152  148123 cri.go:89] found id: ""
	I1010 19:24:45.436183  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.436195  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:45.436203  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:45.436262  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.484321  148123 cri.go:89] found id: ""
	I1010 19:24:45.484347  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.484355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:45.484361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:45.484441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:45.532893  148123 cri.go:89] found id: ""
	I1010 19:24:45.532923  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.532932  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:45.532943  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:45.532958  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:45.585183  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:45.585214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:45.638800  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:45.638847  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:45.653928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:45.653961  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:45.733534  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:45.733564  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:45.733580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.314367  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:48.329687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:48.329754  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:48.370372  148123 cri.go:89] found id: ""
	I1010 19:24:48.370400  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.370409  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:48.370415  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:48.370490  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:48.409362  148123 cri.go:89] found id: ""
	I1010 19:24:48.409396  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.409407  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:48.409415  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:48.409500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:48.449640  148123 cri.go:89] found id: ""
	I1010 19:24:48.449672  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.449681  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:48.449687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:48.449746  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:48.485053  148123 cri.go:89] found id: ""
	I1010 19:24:48.485104  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.485121  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:48.485129  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:48.485183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:48.520083  148123 cri.go:89] found id: ""
	I1010 19:24:48.520113  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.520121  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:48.520127  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:48.520185  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:48.555112  148123 cri.go:89] found id: ""
	I1010 19:24:48.555138  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.555149  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:48.555156  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:48.555241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.171882  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:47.673073  147213 node_ready.go:49] node "no-preload-320324" has status "Ready":"True"
	I1010 19:24:47.673103  147213 node_ready.go:38] duration metric: took 7.00601327s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:47.673117  147213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:47.682195  147213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690079  147213 pod_ready.go:93] pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.690111  147213 pod_ready.go:82] duration metric: took 7.882823ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690126  147213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698009  147213 pod_ready.go:93] pod "etcd-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.698038  147213 pod_ready.go:82] duration metric: took 7.903016ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698052  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:45.066893  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:47.566144  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.551853  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.050365  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.592682  148123 cri.go:89] found id: ""
	I1010 19:24:48.592710  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.592719  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:48.592725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:48.592775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:48.628449  148123 cri.go:89] found id: ""
	I1010 19:24:48.628482  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.628490  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:48.628500  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:48.628513  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.709385  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:48.709428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:48.752542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:48.752677  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:48.812331  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:48.812385  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:48.827057  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:48.827095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:48.908312  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.409122  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:51.423404  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:51.423482  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:51.465742  148123 cri.go:89] found id: ""
	I1010 19:24:51.465781  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.465793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:51.465803  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:51.465875  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:51.501597  148123 cri.go:89] found id: ""
	I1010 19:24:51.501630  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.501641  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:51.501648  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:51.501711  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:51.537500  148123 cri.go:89] found id: ""
	I1010 19:24:51.537539  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.537551  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:51.537559  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:51.537626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:51.579546  148123 cri.go:89] found id: ""
	I1010 19:24:51.579576  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.579587  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:51.579595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:51.579660  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:51.619893  148123 cri.go:89] found id: ""
	I1010 19:24:51.619917  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.619925  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:51.619931  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:51.619999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:51.655787  148123 cri.go:89] found id: ""
	I1010 19:24:51.655824  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.655837  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:51.655846  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:51.655921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:51.699582  148123 cri.go:89] found id: ""
	I1010 19:24:51.699611  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.699619  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:51.699625  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:51.699685  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:51.738658  148123 cri.go:89] found id: ""
	I1010 19:24:51.738689  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.738697  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:51.738707  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:51.738721  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:51.789556  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:51.789587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:51.858919  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:51.858968  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:51.887818  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:51.887854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:51.964408  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.964434  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:51.964449  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:49.705130  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.705847  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.205374  147213 pod_ready.go:93] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.205401  147213 pod_ready.go:82] duration metric: took 5.507341974s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.205413  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210237  147213 pod_ready.go:93] pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.210259  147213 pod_ready.go:82] duration metric: took 4.83925ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210269  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215158  147213 pod_ready.go:93] pod "kube-proxy-vn6sv" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.215186  147213 pod_ready.go:82] duration metric: took 4.909888ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215198  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220077  147213 pod_ready.go:93] pod "kube-scheduler-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.220097  147213 pod_ready.go:82] duration metric: took 4.890652ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220105  147213 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:50.066165  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:52.066343  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.552604  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:56.050748  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.545627  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:54.560748  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:54.560830  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:54.595863  148123 cri.go:89] found id: ""
	I1010 19:24:54.595893  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.595903  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:54.595912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:54.595978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:54.633645  148123 cri.go:89] found id: ""
	I1010 19:24:54.633681  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.633693  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:54.633701  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:54.633761  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:54.668269  148123 cri.go:89] found id: ""
	I1010 19:24:54.668299  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.668311  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:54.668317  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:54.668369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:54.706559  148123 cri.go:89] found id: ""
	I1010 19:24:54.706591  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.706600  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:54.706608  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:54.706673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:54.746248  148123 cri.go:89] found id: ""
	I1010 19:24:54.746283  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.746295  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:54.746303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:54.746383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:54.782982  148123 cri.go:89] found id: ""
	I1010 19:24:54.783017  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.783027  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:54.783033  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:54.783085  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:54.819664  148123 cri.go:89] found id: ""
	I1010 19:24:54.819700  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.819713  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:54.819722  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:54.819797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:54.859603  148123 cri.go:89] found id: ""
	I1010 19:24:54.859632  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.859640  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:54.859650  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:54.859662  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:54.910949  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:54.910987  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:54.925941  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:54.925975  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:55.009626  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:55.009654  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:55.009669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.097196  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:55.097237  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:57.641732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:57.661141  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:57.661222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:57.716054  148123 cri.go:89] found id: ""
	I1010 19:24:57.716086  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.716094  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:57.716100  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:57.716178  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:57.766862  148123 cri.go:89] found id: ""
	I1010 19:24:57.766892  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.766906  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:57.766917  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:57.766989  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:57.814776  148123 cri.go:89] found id: ""
	I1010 19:24:57.814808  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.814821  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:57.814829  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:57.814899  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:57.850458  148123 cri.go:89] found id: ""
	I1010 19:24:57.850495  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.850505  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:57.850516  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:57.850667  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:57.886541  148123 cri.go:89] found id: ""
	I1010 19:24:57.886566  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.886575  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:57.886581  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:57.886645  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:57.920748  148123 cri.go:89] found id: ""
	I1010 19:24:57.920783  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.920795  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:57.920802  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:57.920887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:57.957801  148123 cri.go:89] found id: ""
	I1010 19:24:57.957833  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.957844  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:57.957852  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:57.957919  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:57.995648  148123 cri.go:89] found id: ""
	I1010 19:24:57.995683  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.995694  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:57.995706  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:57.995723  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:58.034987  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:58.035030  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:58.089014  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:58.089066  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:58.104179  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:58.104211  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:58.178239  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:58.178271  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:58.178291  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.229459  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.727298  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.566779  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.065902  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:58.051248  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.550512  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.756057  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:00.770086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:00.770150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:00.811400  148123 cri.go:89] found id: ""
	I1010 19:25:00.811444  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.811456  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:00.811466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:00.811533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:00.848821  148123 cri.go:89] found id: ""
	I1010 19:25:00.848859  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.848870  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:00.848877  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:00.848947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:00.883529  148123 cri.go:89] found id: ""
	I1010 19:25:00.883557  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.883566  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:00.883573  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:00.883631  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:00.922952  148123 cri.go:89] found id: ""
	I1010 19:25:00.922982  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.922992  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:00.922998  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:00.923057  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:00.956116  148123 cri.go:89] found id: ""
	I1010 19:25:00.956147  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.956159  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:00.956168  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:00.956233  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:00.998892  148123 cri.go:89] found id: ""
	I1010 19:25:00.998919  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.998930  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:00.998939  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:00.998996  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:01.037617  148123 cri.go:89] found id: ""
	I1010 19:25:01.037649  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.037657  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:01.037663  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:01.037717  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:01.078003  148123 cri.go:89] found id: ""
	I1010 19:25:01.078034  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.078046  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:01.078055  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:01.078069  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:01.118948  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:01.118977  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:01.171053  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:01.171091  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:01.187027  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:01.187060  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:01.261288  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:01.261315  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:01.261330  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:59.728997  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.227142  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:59.566448  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.066184  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.551951  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:05.050558  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:03.848144  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:03.862527  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:03.862601  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:03.899120  148123 cri.go:89] found id: ""
	I1010 19:25:03.899159  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.899180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:03.899187  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:03.899276  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:03.935461  148123 cri.go:89] found id: ""
	I1010 19:25:03.935492  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.935500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:03.935507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:03.935569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:03.971892  148123 cri.go:89] found id: ""
	I1010 19:25:03.971925  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.971937  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:03.971945  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:03.972019  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:04.008341  148123 cri.go:89] found id: ""
	I1010 19:25:04.008368  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.008377  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:04.008390  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:04.008447  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:04.042007  148123 cri.go:89] found id: ""
	I1010 19:25:04.042036  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.042044  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:04.042051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:04.042101  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:04.078017  148123 cri.go:89] found id: ""
	I1010 19:25:04.078045  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.078053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:04.078059  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:04.078112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:04.116792  148123 cri.go:89] found id: ""
	I1010 19:25:04.116823  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.116832  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:04.116839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:04.116928  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:04.153468  148123 cri.go:89] found id: ""
	I1010 19:25:04.153496  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.153503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:04.153513  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:04.153525  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:04.230646  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:04.230683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:04.270975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:04.271015  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.320845  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:04.320902  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:04.337789  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:04.337828  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:04.416077  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:06.916412  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:06.931309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:06.931376  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:06.969224  148123 cri.go:89] found id: ""
	I1010 19:25:06.969257  148123 logs.go:282] 0 containers: []
	W1010 19:25:06.969269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:06.969277  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:06.969346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:07.006598  148123 cri.go:89] found id: ""
	I1010 19:25:07.006653  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.006667  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:07.006676  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:07.006744  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:07.046392  148123 cri.go:89] found id: ""
	I1010 19:25:07.046418  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.046427  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:07.046433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:07.046491  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:07.089570  148123 cri.go:89] found id: ""
	I1010 19:25:07.089603  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.089615  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:07.089624  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:07.089689  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:07.127152  148123 cri.go:89] found id: ""
	I1010 19:25:07.127185  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.127195  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:07.127204  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:07.127282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:07.160516  148123 cri.go:89] found id: ""
	I1010 19:25:07.160543  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.160554  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:07.160563  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:07.160635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:07.197270  148123 cri.go:89] found id: ""
	I1010 19:25:07.197307  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.197318  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:07.197327  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:07.197395  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:07.232658  148123 cri.go:89] found id: ""
	I1010 19:25:07.232687  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.232696  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:07.232706  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:07.232720  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:07.246579  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:07.246609  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:07.315200  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:07.315230  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:07.315251  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:07.389532  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:07.389580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:07.438808  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:07.438837  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.227537  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.727865  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:04.067121  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.565089  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:08.565565  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:07.051371  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.051420  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.054211  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.990294  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:10.004155  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:10.004270  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:10.039034  148123 cri.go:89] found id: ""
	I1010 19:25:10.039068  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.039078  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:10.039087  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:10.039174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:10.079936  148123 cri.go:89] found id: ""
	I1010 19:25:10.079967  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.079978  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:10.079985  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:10.080068  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:10.116441  148123 cri.go:89] found id: ""
	I1010 19:25:10.116471  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.116483  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:10.116491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:10.116556  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:10.151998  148123 cri.go:89] found id: ""
	I1010 19:25:10.152045  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.152058  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:10.152075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:10.152153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:10.188514  148123 cri.go:89] found id: ""
	I1010 19:25:10.188547  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.188565  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:10.188574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:10.188640  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:10.223648  148123 cri.go:89] found id: ""
	I1010 19:25:10.223682  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.223694  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:10.223702  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:10.223771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:10.264117  148123 cri.go:89] found id: ""
	I1010 19:25:10.264143  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.264151  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:10.264158  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:10.264215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:10.302566  148123 cri.go:89] found id: ""
	I1010 19:25:10.302601  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.302613  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:10.302625  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:10.302649  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:10.316567  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:10.316606  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:10.388642  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:10.388674  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:10.388692  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:10.471035  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:10.471090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:10.519600  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:10.519645  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:13.075428  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:13.090830  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:13.090915  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:13.126302  148123 cri.go:89] found id: ""
	I1010 19:25:13.126336  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.126348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:13.126357  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:13.126429  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:13.163309  148123 cri.go:89] found id: ""
	I1010 19:25:13.163344  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.163357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:13.163365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:13.163428  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:13.197569  148123 cri.go:89] found id: ""
	I1010 19:25:13.197605  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.197615  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:13.197621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:13.197692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:13.236299  148123 cri.go:89] found id: ""
	I1010 19:25:13.236328  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.236336  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:13.236342  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:13.236406  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:13.275667  148123 cri.go:89] found id: ""
	I1010 19:25:13.275696  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.275705  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:13.275711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:13.275776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:13.309728  148123 cri.go:89] found id: ""
	I1010 19:25:13.309763  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.309774  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:13.309786  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:13.309854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:13.347464  148123 cri.go:89] found id: ""
	I1010 19:25:13.347493  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.347504  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:13.347513  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:13.347576  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:13.385086  148123 cri.go:89] found id: ""
	I1010 19:25:13.385119  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.385130  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:13.385142  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:13.385156  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:13.466513  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:13.466553  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:13.508610  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:13.508651  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:09.226850  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.227241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.726879  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:10.565663  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:12.565845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.555465  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:16.051764  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.564897  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:13.564932  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:13.578771  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:13.578803  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:13.651857  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.153066  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:16.167613  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:16.167698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:16.203867  148123 cri.go:89] found id: ""
	I1010 19:25:16.203897  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.203906  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:16.203912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:16.203965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:16.242077  148123 cri.go:89] found id: ""
	I1010 19:25:16.242109  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.242130  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:16.242138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:16.242234  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:16.283792  148123 cri.go:89] found id: ""
	I1010 19:25:16.283825  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.283836  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:16.283844  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:16.283907  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:16.319934  148123 cri.go:89] found id: ""
	I1010 19:25:16.319969  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.319980  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:16.319990  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:16.320063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:16.360439  148123 cri.go:89] found id: ""
	I1010 19:25:16.360482  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.360495  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:16.360504  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:16.360569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:16.395890  148123 cri.go:89] found id: ""
	I1010 19:25:16.395922  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.395931  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:16.395941  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:16.396009  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:16.431396  148123 cri.go:89] found id: ""
	I1010 19:25:16.431442  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.431456  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:16.431463  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:16.431531  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:16.471768  148123 cri.go:89] found id: ""
	I1010 19:25:16.471796  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.471804  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:16.471814  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:16.471830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:16.526112  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:16.526155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:16.541041  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:16.541074  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:16.623245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.623273  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:16.623289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:16.710840  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:16.710884  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
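	Each diagnostic pass in this run follows the same sequence: pgrep for a kube-apiserver process, a crictl listing per control-plane component, then log collection from kubelet, dmesg, kubectl describe nodes, CRI-O, and container status. As an illustrative sketch only (component names taken from the log above, not from the test source), the per-component check amounts to:

	  # empty output for a component means no container, running or exited, matches that name
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	           kube-controller-manager kindnet kubernetes-dashboard; do
	    sudo crictl ps -a --quiet --name="$c"
	  done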
	I1010 19:25:15.727171  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.728705  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:15.067362  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.566242  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:18.551207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:21.050222  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:19.257566  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:19.273982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:19.274063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:19.311360  148123 cri.go:89] found id: ""
	I1010 19:25:19.311392  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.311404  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:19.311411  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:19.311475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:19.347025  148123 cri.go:89] found id: ""
	I1010 19:25:19.347053  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.347062  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:19.347068  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:19.347120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:19.383575  148123 cri.go:89] found id: ""
	I1010 19:25:19.383608  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.383620  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:19.383633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:19.383698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:19.419245  148123 cri.go:89] found id: ""
	I1010 19:25:19.419270  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.419278  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:19.419284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:19.419334  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:19.453563  148123 cri.go:89] found id: ""
	I1010 19:25:19.453591  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.453601  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:19.453658  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:19.453715  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:19.496430  148123 cri.go:89] found id: ""
	I1010 19:25:19.496458  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.496466  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:19.496473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:19.496533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:19.534626  148123 cri.go:89] found id: ""
	I1010 19:25:19.534670  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.534682  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:19.534691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:19.534757  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:19.574186  148123 cri.go:89] found id: ""
	I1010 19:25:19.574236  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.574248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:19.574261  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:19.574283  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:19.587952  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:19.587989  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:19.664607  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:19.664644  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:19.664664  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:19.742185  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:19.742224  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:19.781530  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:19.781562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.335398  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:22.349627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:22.349693  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:22.389075  148123 cri.go:89] found id: ""
	I1010 19:25:22.389102  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.389110  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:22.389117  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:22.389168  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:22.424331  148123 cri.go:89] found id: ""
	I1010 19:25:22.424368  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.424381  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:22.424389  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:22.424460  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:22.464011  148123 cri.go:89] found id: ""
	I1010 19:25:22.464051  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.464062  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:22.464070  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:22.464141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:22.500786  148123 cri.go:89] found id: ""
	I1010 19:25:22.500819  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.500828  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:22.500835  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:22.500914  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:22.538016  148123 cri.go:89] found id: ""
	I1010 19:25:22.538053  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.538061  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:22.538068  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:22.538124  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:22.576600  148123 cri.go:89] found id: ""
	I1010 19:25:22.576629  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.576637  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:22.576643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:22.576702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:22.616020  148123 cri.go:89] found id: ""
	I1010 19:25:22.616048  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.616057  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:22.616064  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:22.616114  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:22.652238  148123 cri.go:89] found id: ""
	I1010 19:25:22.652271  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.652283  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:22.652295  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:22.652311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:22.726182  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:22.726216  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:22.726234  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:22.814040  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:22.814086  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:22.859254  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:22.859289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.916565  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:22.916607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:20.227871  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.732566  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:20.066872  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.566173  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:23.050833  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.551662  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.432636  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:25.446982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:25.447047  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.482440  148123 cri.go:89] found id: ""
	I1010 19:25:25.482472  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.482484  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:25.482492  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:25.482552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:25.517709  148123 cri.go:89] found id: ""
	I1010 19:25:25.517742  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.517756  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:25.517764  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:25.517867  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:25.561504  148123 cri.go:89] found id: ""
	I1010 19:25:25.561532  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.561544  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:25.561552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:25.561616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:25.601704  148123 cri.go:89] found id: ""
	I1010 19:25:25.601741  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.601753  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:25.601762  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:25.601825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:25.638519  148123 cri.go:89] found id: ""
	I1010 19:25:25.638544  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.638552  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:25.638558  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:25.638609  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:25.673390  148123 cri.go:89] found id: ""
	I1010 19:25:25.673425  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.673436  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:25.673447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:25.673525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:25.708251  148123 cri.go:89] found id: ""
	I1010 19:25:25.708278  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.708286  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:25.708293  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:25.708354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:25.749793  148123 cri.go:89] found id: ""
	I1010 19:25:25.749826  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.749837  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:25.749846  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:25.749861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:25.802511  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:25.802554  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:25.817523  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:25.817562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:25.898595  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:25.898619  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:25.898635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:25.978629  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:25.978684  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:28.520165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:28.532725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:28.532793  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.226875  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.729015  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.066298  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.066963  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.551915  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.558497  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:28.571667  148123 cri.go:89] found id: ""
	I1010 19:25:28.571695  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.571704  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:28.571710  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:28.571770  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:28.608136  148123 cri.go:89] found id: ""
	I1010 19:25:28.608165  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.608174  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:28.608181  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:28.608244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:28.642123  148123 cri.go:89] found id: ""
	I1010 19:25:28.642161  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.642173  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:28.642181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:28.642242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:28.678600  148123 cri.go:89] found id: ""
	I1010 19:25:28.678633  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.678643  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:28.678651  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:28.678702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:28.716627  148123 cri.go:89] found id: ""
	I1010 19:25:28.716660  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.716673  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:28.716681  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:28.716766  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:28.752750  148123 cri.go:89] found id: ""
	I1010 19:25:28.752786  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.752798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:28.752806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:28.752898  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:28.790021  148123 cri.go:89] found id: ""
	I1010 19:25:28.790054  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.790066  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:28.790075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:28.790128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:28.828760  148123 cri.go:89] found id: ""
	I1010 19:25:28.828803  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.828827  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:28.828839  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:28.828877  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:28.879773  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:28.879811  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:28.894311  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:28.894342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:28.968675  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:28.968706  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:28.968729  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:29.045862  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:29.045913  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:31.588772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:31.601453  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:31.601526  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:31.637056  148123 cri.go:89] found id: ""
	I1010 19:25:31.637090  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.637101  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:31.637108  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:31.637183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:31.677825  148123 cri.go:89] found id: ""
	I1010 19:25:31.677854  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.677862  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:31.677867  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:31.677920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:31.715372  148123 cri.go:89] found id: ""
	I1010 19:25:31.715402  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.715411  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:31.715419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:31.715470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:31.753767  148123 cri.go:89] found id: ""
	I1010 19:25:31.753794  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.753806  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:31.753814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:31.753879  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:31.792999  148123 cri.go:89] found id: ""
	I1010 19:25:31.793024  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.793035  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:31.793050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:31.793106  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:31.831571  148123 cri.go:89] found id: ""
	I1010 19:25:31.831598  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.831607  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:31.831614  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:31.831673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:31.880382  148123 cri.go:89] found id: ""
	I1010 19:25:31.880422  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.880436  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:31.880447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:31.880527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:31.919596  148123 cri.go:89] found id: ""
	I1010 19:25:31.919627  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.919637  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:31.919647  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:31.919661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:31.967963  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:31.968001  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:31.983064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:31.983106  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:32.056073  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:32.056105  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:32.056122  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:32.142927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:32.142978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:30.226683  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.227047  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.565699  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:31.566109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.051411  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.052064  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.550062  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.690025  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:34.705326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:34.705413  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:34.740756  148123 cri.go:89] found id: ""
	I1010 19:25:34.740786  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.740793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:34.740799  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:34.740872  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:34.777021  148123 cri.go:89] found id: ""
	I1010 19:25:34.777049  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.777061  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:34.777069  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:34.777123  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:34.826699  148123 cri.go:89] found id: ""
	I1010 19:25:34.826735  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.826747  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:34.826754  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:34.826896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:34.864297  148123 cri.go:89] found id: ""
	I1010 19:25:34.864329  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.864340  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:34.864348  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:34.864415  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:34.899198  148123 cri.go:89] found id: ""
	I1010 19:25:34.899234  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.899247  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:34.899256  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:34.899315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:34.934132  148123 cri.go:89] found id: ""
	I1010 19:25:34.934162  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.934170  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:34.934176  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:34.934238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:34.969858  148123 cri.go:89] found id: ""
	I1010 19:25:34.969891  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.969903  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:34.969911  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:34.969978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:35.006082  148123 cri.go:89] found id: ""
	I1010 19:25:35.006121  148123 logs.go:282] 0 containers: []
	W1010 19:25:35.006132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:35.006145  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:35.006160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:35.060596  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:35.060631  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:35.076246  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:35.076281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:35.151879  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:35.151912  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:35.151931  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:35.231802  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:35.231845  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:37.774550  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:37.787871  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:37.787938  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:37.824273  148123 cri.go:89] found id: ""
	I1010 19:25:37.824306  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.824317  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:37.824325  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:37.824410  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:37.860548  148123 cri.go:89] found id: ""
	I1010 19:25:37.860582  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.860593  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:37.860601  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:37.860677  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:37.894616  148123 cri.go:89] found id: ""
	I1010 19:25:37.894644  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.894654  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:37.894662  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:37.894730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:37.931247  148123 cri.go:89] found id: ""
	I1010 19:25:37.931272  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.931280  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:37.931287  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:37.931337  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:37.968028  148123 cri.go:89] found id: ""
	I1010 19:25:37.968068  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.968079  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:37.968086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:37.968153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:38.004661  148123 cri.go:89] found id: ""
	I1010 19:25:38.004692  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.004700  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:38.004706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:38.004760  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:38.040874  148123 cri.go:89] found id: ""
	I1010 19:25:38.040906  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.040915  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:38.040922  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:38.040990  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:38.083771  148123 cri.go:89] found id: ""
	I1010 19:25:38.083802  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.083811  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:38.083821  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:38.083836  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:38.099645  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:38.099683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:38.172168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:38.172207  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:38.172223  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:38.248523  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:38.248561  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:38.306594  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:38.306630  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:34.728106  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:37.226285  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.065919  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.066751  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.067361  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.550359  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.551190  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
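	The interleaved pod_ready messages show each profile's metrics-server pod stuck at Ready=False for the whole window. A manual spot check, assuming kubectl access to the affected profile's context and using a pod name taken from the log, would be:

	  # waits up to 60s for the Ready condition; exits non-zero if the pod never becomes Ready
	  kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-6867b74b74-kw529 --timeout=60s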
	I1010 19:25:40.873868  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:40.889561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:40.889668  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:40.926769  148123 cri.go:89] found id: ""
	I1010 19:25:40.926800  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.926808  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:40.926816  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:40.926869  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:40.962564  148123 cri.go:89] found id: ""
	I1010 19:25:40.962592  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.962600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:40.962606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:40.962659  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:40.997856  148123 cri.go:89] found id: ""
	I1010 19:25:40.997885  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.997894  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:40.997900  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:40.997951  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:41.035479  148123 cri.go:89] found id: ""
	I1010 19:25:41.035506  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.035514  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:41.035520  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:41.035575  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:41.077733  148123 cri.go:89] found id: ""
	I1010 19:25:41.077817  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.077830  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:41.077840  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:41.077916  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:41.115439  148123 cri.go:89] found id: ""
	I1010 19:25:41.115476  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.115489  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:41.115498  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:41.115552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:41.152586  148123 cri.go:89] found id: ""
	I1010 19:25:41.152625  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.152636  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:41.152643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:41.152695  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:41.186807  148123 cri.go:89] found id: ""
	I1010 19:25:41.186840  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.186851  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:41.186862  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:41.186880  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:41.200404  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:41.200447  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:41.280717  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:41.280744  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:41.280760  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:41.360561  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:41.360611  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:41.404461  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:41.404497  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:39.226903  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:41.227077  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.727197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.570404  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.066523  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.050813  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.051094  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.955723  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:43.970895  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:43.970964  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:44.012162  148123 cri.go:89] found id: ""
	I1010 19:25:44.012194  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.012203  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:44.012209  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:44.012282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:44.074583  148123 cri.go:89] found id: ""
	I1010 19:25:44.074606  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.074614  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:44.074620  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:44.074675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:44.133562  148123 cri.go:89] found id: ""
	I1010 19:25:44.133588  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.133596  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:44.133602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:44.133658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:44.168832  148123 cri.go:89] found id: ""
	I1010 19:25:44.168883  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.168896  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:44.168908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:44.168967  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:44.204896  148123 cri.go:89] found id: ""
	I1010 19:25:44.204923  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.204934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:44.204943  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:44.205002  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:44.242410  148123 cri.go:89] found id: ""
	I1010 19:25:44.242437  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.242448  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:44.242456  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:44.242524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:44.278553  148123 cri.go:89] found id: ""
	I1010 19:25:44.278581  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.278589  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:44.278595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:44.278658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:44.316099  148123 cri.go:89] found id: ""
	I1010 19:25:44.316125  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.316132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:44.316141  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:44.316155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:44.365809  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:44.365860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:44.380129  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:44.380162  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:44.454241  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:44.454267  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:44.454281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:44.530928  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:44.530974  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.078279  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:47.092233  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:47.092310  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:47.129517  148123 cri.go:89] found id: ""
	I1010 19:25:47.129557  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.129573  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:47.129582  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:47.129654  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:47.170309  148123 cri.go:89] found id: ""
	I1010 19:25:47.170345  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.170357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:47.170365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:47.170432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:47.205110  148123 cri.go:89] found id: ""
	I1010 19:25:47.205145  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.205156  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:47.205162  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:47.205228  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:47.245668  148123 cri.go:89] found id: ""
	I1010 19:25:47.245695  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.245704  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:47.245711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:47.245768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:47.284549  148123 cri.go:89] found id: ""
	I1010 19:25:47.284576  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.284584  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:47.284590  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:47.284643  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:47.324676  148123 cri.go:89] found id: ""
	I1010 19:25:47.324708  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.324720  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:47.324729  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:47.324782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:47.364315  148123 cri.go:89] found id: ""
	I1010 19:25:47.364347  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.364356  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:47.364362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:47.364433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:47.400611  148123 cri.go:89] found id: ""
	I1010 19:25:47.400641  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.400652  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:47.400664  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:47.400680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:47.415185  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:47.415227  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:47.481638  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:47.481665  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:47.481681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:47.560135  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:47.560171  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.602144  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:47.602174  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:46.227386  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:48.227699  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.066887  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.565340  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.051459  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:49.550170  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:51.554542  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.151770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:50.165336  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:50.165403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:50.201045  148123 cri.go:89] found id: ""
	I1010 19:25:50.201072  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.201082  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:50.201089  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:50.201154  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:50.237047  148123 cri.go:89] found id: ""
	I1010 19:25:50.237082  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.237094  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:50.237102  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:50.237174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:50.273727  148123 cri.go:89] found id: ""
	I1010 19:25:50.273756  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.273767  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:50.273780  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:50.273843  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:50.311397  148123 cri.go:89] found id: ""
	I1010 19:25:50.311424  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.311433  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:50.311450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:50.311513  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:50.347593  148123 cri.go:89] found id: ""
	I1010 19:25:50.347625  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.347637  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:50.347705  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:50.347782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:50.382195  148123 cri.go:89] found id: ""
	I1010 19:25:50.382228  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.382240  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:50.382247  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:50.382313  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:50.419189  148123 cri.go:89] found id: ""
	I1010 19:25:50.419221  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.419229  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:50.419236  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:50.419297  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:50.454368  148123 cri.go:89] found id: ""
	I1010 19:25:50.454399  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.454410  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:50.454421  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:50.454440  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:50.495575  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:50.495623  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.548691  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:50.548737  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:50.564585  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:50.564621  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:50.637152  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:50.637174  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:50.637190  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.221939  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:53.235889  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:53.235968  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:53.278589  148123 cri.go:89] found id: ""
	I1010 19:25:53.278620  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.278632  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:53.278640  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:53.278705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:53.315062  148123 cri.go:89] found id: ""
	I1010 19:25:53.315090  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.315101  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:53.315109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:53.315182  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:53.348994  148123 cri.go:89] found id: ""
	I1010 19:25:53.349031  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.349040  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:53.349046  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:53.349099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:53.383334  148123 cri.go:89] found id: ""
	I1010 19:25:53.383363  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.383373  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:53.383380  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:53.383450  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:53.417888  148123 cri.go:89] found id: ""
	I1010 19:25:53.417920  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.417931  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:53.417938  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:53.418013  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:53.454757  148123 cri.go:89] found id: ""
	I1010 19:25:53.454787  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.454798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:53.454806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:53.454871  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:53.491422  148123 cri.go:89] found id: ""
	I1010 19:25:53.491452  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.491464  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:53.491472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:53.491541  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:53.531240  148123 cri.go:89] found id: ""
	I1010 19:25:53.531271  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.531282  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:53.531294  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:53.531310  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.727196  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.226957  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.065907  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:52.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:54.051112  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:56.554137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.582576  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:53.582608  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:53.596481  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:53.596511  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:53.666134  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:53.666162  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:53.666178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.745621  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:53.745659  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.290744  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:56.307536  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:56.307610  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:56.353456  148123 cri.go:89] found id: ""
	I1010 19:25:56.353484  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.353494  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:56.353501  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:56.353553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:56.394093  148123 cri.go:89] found id: ""
	I1010 19:25:56.394120  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.394131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:56.394138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:56.394203  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:56.428388  148123 cri.go:89] found id: ""
	I1010 19:25:56.428428  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.428441  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:56.428450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:56.428524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:56.464414  148123 cri.go:89] found id: ""
	I1010 19:25:56.464459  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.464472  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:56.464480  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:56.464563  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:56.505271  148123 cri.go:89] found id: ""
	I1010 19:25:56.505307  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.505316  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:56.505322  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:56.505374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:56.546424  148123 cri.go:89] found id: ""
	I1010 19:25:56.546456  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.546467  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:56.546475  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:56.546545  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:56.587458  148123 cri.go:89] found id: ""
	I1010 19:25:56.587489  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.587501  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:56.587509  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:56.587588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:56.628248  148123 cri.go:89] found id: ""
	I1010 19:25:56.628279  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.628291  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:56.628304  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:56.628320  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:56.642109  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:56.642141  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:56.723646  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:56.723678  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:56.723696  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:56.805849  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:56.805899  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.849863  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:56.849891  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:55.230447  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.726896  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:55.066248  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.565240  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.051145  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:01.554276  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.407093  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:59.422915  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:59.423005  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:59.462867  148123 cri.go:89] found id: ""
	I1010 19:25:59.462898  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.462909  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:59.462917  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:59.462980  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:59.496931  148123 cri.go:89] found id: ""
	I1010 19:25:59.496958  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.496968  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:59.496983  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:59.497049  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:59.532241  148123 cri.go:89] found id: ""
	I1010 19:25:59.532271  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.532283  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:59.532291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:59.532352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:59.571285  148123 cri.go:89] found id: ""
	I1010 19:25:59.571313  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.571324  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:59.571331  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:59.571391  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:59.606687  148123 cri.go:89] found id: ""
	I1010 19:25:59.606721  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.606734  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:59.606741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:59.606800  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:59.643247  148123 cri.go:89] found id: ""
	I1010 19:25:59.643276  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.643286  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:59.643294  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:59.643369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:59.679305  148123 cri.go:89] found id: ""
	I1010 19:25:59.679335  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.679344  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:59.679350  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:59.679407  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:59.716149  148123 cri.go:89] found id: ""
	I1010 19:25:59.716198  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.716210  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:59.716222  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:59.716239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:59.730334  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:59.730364  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:59.799759  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.799786  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:59.799804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:59.881883  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:59.881925  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:59.923755  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:59.923784  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.478043  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:02.492627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:02.492705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:02.532133  148123 cri.go:89] found id: ""
	I1010 19:26:02.532160  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.532172  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:02.532179  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:02.532244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:02.571915  148123 cri.go:89] found id: ""
	I1010 19:26:02.571943  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.571951  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:02.571957  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:02.572014  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:02.616330  148123 cri.go:89] found id: ""
	I1010 19:26:02.616365  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.616376  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:02.616385  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:02.616455  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:02.662245  148123 cri.go:89] found id: ""
	I1010 19:26:02.662275  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.662284  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:02.662291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:02.662354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:02.698692  148123 cri.go:89] found id: ""
	I1010 19:26:02.698720  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.698731  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:02.698739  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:02.698805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:02.736643  148123 cri.go:89] found id: ""
	I1010 19:26:02.736667  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.736677  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:02.736685  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:02.736742  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:02.780356  148123 cri.go:89] found id: ""
	I1010 19:26:02.780388  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.780399  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:02.780407  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:02.780475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:02.816716  148123 cri.go:89] found id: ""
	I1010 19:26:02.816744  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.816752  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:02.816761  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:02.816775  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:02.899002  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:02.899046  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:02.942060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:02.942093  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.997605  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:02.997661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:03.011768  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:03.011804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:03.096404  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.727075  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.227526  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.565903  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.066179  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.049656  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.050425  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:05.596971  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:05.612524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:05.612598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:05.649999  148123 cri.go:89] found id: ""
	I1010 19:26:05.650032  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.650044  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:05.650052  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:05.650119  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:05.684365  148123 cri.go:89] found id: ""
	I1010 19:26:05.684398  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.684419  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:05.684427  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:05.684494  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:05.721801  148123 cri.go:89] found id: ""
	I1010 19:26:05.721832  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.721840  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:05.721847  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:05.721908  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:05.762010  148123 cri.go:89] found id: ""
	I1010 19:26:05.762037  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.762046  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:05.762056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:05.762117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:05.803425  148123 cri.go:89] found id: ""
	I1010 19:26:05.803457  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.803468  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:05.803476  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:05.803551  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:05.838800  148123 cri.go:89] found id: ""
	I1010 19:26:05.838833  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.838845  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:05.838853  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:05.838921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:05.875110  148123 cri.go:89] found id: ""
	I1010 19:26:05.875147  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.875158  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:05.875167  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:05.875245  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:05.908951  148123 cri.go:89] found id: ""
	I1010 19:26:05.908988  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.909000  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:05.909012  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:05.909027  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:05.922263  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:05.922293  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:05.996441  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:05.996465  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:05.996484  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:06.078113  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:06.078160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:06.122919  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:06.122956  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:04.726335  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.728178  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.066573  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.564991  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.566655  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.050522  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:10.550288  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.674464  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:08.688452  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:08.688570  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:08.726240  148123 cri.go:89] found id: ""
	I1010 19:26:08.726269  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.726279  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:08.726287  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:08.726370  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:08.764762  148123 cri.go:89] found id: ""
	I1010 19:26:08.764790  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.764799  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:08.764806  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:08.764887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:08.805522  148123 cri.go:89] found id: ""
	I1010 19:26:08.805552  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.805563  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:08.805572  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:08.805642  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:08.845694  148123 cri.go:89] found id: ""
	I1010 19:26:08.845729  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.845740  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:08.845747  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:08.845817  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:08.885024  148123 cri.go:89] found id: ""
	I1010 19:26:08.885054  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.885066  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:08.885074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:08.885138  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:08.927500  148123 cri.go:89] found id: ""
	I1010 19:26:08.927531  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.927540  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:08.927547  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:08.927616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:08.963870  148123 cri.go:89] found id: ""
	I1010 19:26:08.963904  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.963916  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:08.963924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:08.963988  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:08.997984  148123 cri.go:89] found id: ""
	I1010 19:26:08.998017  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.998027  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:08.998039  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:08.998056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:09.049307  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:09.049341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:09.063341  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:09.063432  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:09.145190  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:09.145214  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:09.145226  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:09.231409  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:09.231445  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:11.773475  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:11.788981  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:11.789055  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:11.830289  148123 cri.go:89] found id: ""
	I1010 19:26:11.830319  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.830327  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:11.830333  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:11.830399  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:11.867093  148123 cri.go:89] found id: ""
	I1010 19:26:11.867120  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.867131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:11.867138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:11.867212  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:11.902146  148123 cri.go:89] found id: ""
	I1010 19:26:11.902181  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.902192  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:11.902201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:11.902271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:11.938652  148123 cri.go:89] found id: ""
	I1010 19:26:11.938681  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.938691  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:11.938703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:11.938771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:11.975903  148123 cri.go:89] found id: ""
	I1010 19:26:11.975937  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.975947  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:11.975955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:11.976020  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:12.014083  148123 cri.go:89] found id: ""
	I1010 19:26:12.014111  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.014119  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:12.014126  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:12.014190  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:12.057832  148123 cri.go:89] found id: ""
	I1010 19:26:12.057858  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.057867  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:12.057874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:12.057925  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:12.103253  148123 cri.go:89] found id: ""
	I1010 19:26:12.103286  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.103298  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:12.103311  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:12.103326  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:12.171062  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:12.171104  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:12.185463  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:12.185496  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:12.261568  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:12.261592  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:12.261607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:12.345819  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:12.345860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:09.226954  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.227205  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.227457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.066777  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.565854  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:12.551323  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:15.051745  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:14.886283  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:14.901117  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:14.901213  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:14.935235  148123 cri.go:89] found id: ""
	I1010 19:26:14.935264  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.935273  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:14.935280  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:14.935336  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:14.970617  148123 cri.go:89] found id: ""
	I1010 19:26:14.970642  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.970652  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:14.970658  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:14.970722  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:15.011086  148123 cri.go:89] found id: ""
	I1010 19:26:15.011113  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.011122  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:15.011128  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:15.011188  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:15.053004  148123 cri.go:89] found id: ""
	I1010 19:26:15.053026  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.053033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:15.053039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:15.053099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:15.089275  148123 cri.go:89] found id: ""
	I1010 19:26:15.089304  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.089312  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:15.089319  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:15.089383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:15.126620  148123 cri.go:89] found id: ""
	I1010 19:26:15.126645  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.126654  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:15.126660  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:15.126723  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:15.160920  148123 cri.go:89] found id: ""
	I1010 19:26:15.160956  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.160969  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:15.160979  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:15.161037  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:15.197430  148123 cri.go:89] found id: ""
	I1010 19:26:15.197462  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.197474  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:15.197486  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:15.197507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:15.240905  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:15.240953  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:15.297663  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:15.297719  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:15.312279  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:15.312313  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:15.395296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:15.395320  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:15.395336  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:17.978030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:17.992574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:17.992651  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:18.028846  148123 cri.go:89] found id: ""
	I1010 19:26:18.028890  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.028901  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:18.028908  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:18.028965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:18.073004  148123 cri.go:89] found id: ""
	I1010 19:26:18.073033  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.073041  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:18.073047  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:18.073118  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:18.112062  148123 cri.go:89] found id: ""
	I1010 19:26:18.112098  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.112111  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:18.112121  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:18.112209  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:18.151565  148123 cri.go:89] found id: ""
	I1010 19:26:18.151597  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.151608  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:18.151616  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:18.151675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:18.197483  148123 cri.go:89] found id: ""
	I1010 19:26:18.197509  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.197518  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:18.197524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:18.197591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:18.235151  148123 cri.go:89] found id: ""
	I1010 19:26:18.235181  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.235193  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:18.235201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:18.235279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:18.273807  148123 cri.go:89] found id: ""
	I1010 19:26:18.273841  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.273852  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:18.273858  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:18.273920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:18.311644  148123 cri.go:89] found id: ""
	I1010 19:26:18.311677  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.311688  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:18.311700  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:18.311716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:18.355599  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:18.355635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:18.407786  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:18.407830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:18.422944  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:18.422976  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:18.496830  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:18.496873  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:18.496887  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:15.227600  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.726712  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:16.065701  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:18.066861  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.558257  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.050914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:21.108300  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:21.123177  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:21.123243  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:21.162544  148123 cri.go:89] found id: ""
	I1010 19:26:21.162575  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.162586  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:21.162595  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:21.162662  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:21.199799  148123 cri.go:89] found id: ""
	I1010 19:26:21.199828  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.199839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:21.199845  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:21.199901  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:21.246333  148123 cri.go:89] found id: ""
	I1010 19:26:21.246357  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.246366  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:21.246372  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:21.246433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:21.282681  148123 cri.go:89] found id: ""
	I1010 19:26:21.282719  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.282732  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:21.282740  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:21.282810  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:21.319500  148123 cri.go:89] found id: ""
	I1010 19:26:21.319535  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.319546  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:21.319554  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:21.319623  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:21.355779  148123 cri.go:89] found id: ""
	I1010 19:26:21.355809  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.355820  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:21.355828  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:21.355902  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:21.394567  148123 cri.go:89] found id: ""
	I1010 19:26:21.394608  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.394617  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:21.394623  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:21.394684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:21.430009  148123 cri.go:89] found id: ""
	I1010 19:26:21.430046  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.430058  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:21.430069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:21.430089  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:21.443267  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:21.443301  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:21.517046  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:21.517068  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:21.517083  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:21.594927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:21.594978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:21.634274  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:21.634311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:20.227157  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.727736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.566652  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:23.066459  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.550526  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.050647  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:24.189760  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:24.204355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:24.204420  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:24.248130  148123 cri.go:89] found id: ""
	I1010 19:26:24.248160  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.248171  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:24.248178  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:24.248244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:24.293281  148123 cri.go:89] found id: ""
	I1010 19:26:24.293312  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.293324  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:24.293331  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:24.293400  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:24.350709  148123 cri.go:89] found id: ""
	I1010 19:26:24.350743  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.350755  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:24.350765  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:24.350838  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:24.394113  148123 cri.go:89] found id: ""
	I1010 19:26:24.394152  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.394170  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:24.394181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:24.394256  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:24.445999  148123 cri.go:89] found id: ""
	I1010 19:26:24.446030  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.446042  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:24.446051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:24.446120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:24.483566  148123 cri.go:89] found id: ""
	I1010 19:26:24.483596  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.483605  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:24.483612  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:24.483665  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:24.520705  148123 cri.go:89] found id: ""
	I1010 19:26:24.520736  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.520748  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:24.520757  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:24.520825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:24.557316  148123 cri.go:89] found id: ""
	I1010 19:26:24.557346  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.557355  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:24.557364  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:24.557376  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:24.608065  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:24.608109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:24.623202  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:24.623250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:24.694665  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:24.694697  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:24.694716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.783369  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:24.783414  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.327524  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:27.341132  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:27.341214  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:27.376589  148123 cri.go:89] found id: ""
	I1010 19:26:27.376618  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.376627  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:27.376633  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:27.376684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:27.414452  148123 cri.go:89] found id: ""
	I1010 19:26:27.414491  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.414502  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:27.414510  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:27.414572  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:27.449839  148123 cri.go:89] found id: ""
	I1010 19:26:27.449867  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.449875  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:27.449882  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:27.449932  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:27.492445  148123 cri.go:89] found id: ""
	I1010 19:26:27.492472  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.492480  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:27.492487  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:27.492549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:27.533032  148123 cri.go:89] found id: ""
	I1010 19:26:27.533085  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.533095  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:27.533122  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:27.533199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:27.571096  148123 cri.go:89] found id: ""
	I1010 19:26:27.571122  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.571130  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:27.571135  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:27.571204  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:27.608762  148123 cri.go:89] found id: ""
	I1010 19:26:27.608798  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.608809  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:27.608818  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:27.608896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:27.648200  148123 cri.go:89] found id: ""
	I1010 19:26:27.648236  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.648248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:27.648260  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:27.648275  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.698124  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:27.698154  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:27.755009  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:27.755052  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:27.770048  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:27.770084  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:27.845475  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:27.845505  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:27.845519  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.729352  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:26.731831  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.566028  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.567052  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.555698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.049914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.423117  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:30.436706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:30.436768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:30.473367  148123 cri.go:89] found id: ""
	I1010 19:26:30.473396  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.473408  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:30.473417  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:30.473470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:30.510560  148123 cri.go:89] found id: ""
	I1010 19:26:30.510599  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.510610  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:30.510619  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:30.510687  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:30.549682  148123 cri.go:89] found id: ""
	I1010 19:26:30.549715  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.549726  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:30.549734  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:30.549798  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:30.586401  148123 cri.go:89] found id: ""
	I1010 19:26:30.586425  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.586434  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:30.586441  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:30.586492  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:30.622012  148123 cri.go:89] found id: ""
	I1010 19:26:30.622037  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.622045  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:30.622052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:30.622107  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:30.658336  148123 cri.go:89] found id: ""
	I1010 19:26:30.658364  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.658372  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:30.658378  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:30.658442  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:30.695495  148123 cri.go:89] found id: ""
	I1010 19:26:30.695523  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.695532  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:30.695538  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:30.695591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:30.735093  148123 cri.go:89] found id: ""
	I1010 19:26:30.735125  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.735136  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:30.735148  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:30.735163  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:30.816097  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:30.816140  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:30.853878  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:30.853915  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:30.906289  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:30.906341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:30.923742  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:30.923769  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:30.993226  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.493799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:33.508083  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:33.508166  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:33.545019  148123 cri.go:89] found id: ""
	I1010 19:26:33.545055  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.545072  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:33.545081  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:33.545150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:29.226673  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:31.227117  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.727777  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.068231  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.566025  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.050118  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:34.051720  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:36.550138  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.584215  148123 cri.go:89] found id: ""
	I1010 19:26:33.584242  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.584252  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:33.584259  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:33.584315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:33.620598  148123 cri.go:89] found id: ""
	I1010 19:26:33.620627  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.620636  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:33.620643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:33.620691  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:33.655225  148123 cri.go:89] found id: ""
	I1010 19:26:33.655252  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.655260  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:33.655267  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:33.655322  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:33.693464  148123 cri.go:89] found id: ""
	I1010 19:26:33.693490  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.693498  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:33.693505  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:33.693554  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:33.734024  148123 cri.go:89] found id: ""
	I1010 19:26:33.734059  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.734071  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:33.734079  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:33.734141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:33.769434  148123 cri.go:89] found id: ""
	I1010 19:26:33.769467  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.769476  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:33.769483  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:33.769533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:33.809011  148123 cri.go:89] found id: ""
	I1010 19:26:33.809040  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.809049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:33.809059  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:33.809071  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:33.864186  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:33.864230  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:33.878681  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:33.878711  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:33.958225  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.958250  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:33.958265  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:34.034730  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:34.034774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.577123  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:36.591494  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:36.591560  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:36.625170  148123 cri.go:89] found id: ""
	I1010 19:26:36.625210  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.625222  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:36.625230  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:36.625295  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:36.660790  148123 cri.go:89] found id: ""
	I1010 19:26:36.660821  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.660831  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:36.660838  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:36.660911  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:36.695742  148123 cri.go:89] found id: ""
	I1010 19:26:36.695774  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.695786  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:36.695793  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:36.695854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:36.729523  148123 cri.go:89] found id: ""
	I1010 19:26:36.729552  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.729561  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:36.729569  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:36.729630  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:36.765515  148123 cri.go:89] found id: ""
	I1010 19:26:36.765549  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.765561  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:36.765571  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:36.765635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:36.799110  148123 cri.go:89] found id: ""
	I1010 19:26:36.799144  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.799155  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:36.799164  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:36.799224  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:36.832806  148123 cri.go:89] found id: ""
	I1010 19:26:36.832834  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.832845  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:36.832865  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:36.832933  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:36.869093  148123 cri.go:89] found id: ""
	I1010 19:26:36.869126  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.869137  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:36.869149  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:36.869165  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:36.922229  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:36.922276  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:36.936973  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:36.937019  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:37.016400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:37.016425  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:37.016448  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:37.106308  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:37.106354  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.227451  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.726229  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:35.067396  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:37.565711  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.550438  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:41.050698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:39.652101  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:39.665546  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:39.665639  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:39.699646  148123 cri.go:89] found id: ""
	I1010 19:26:39.699675  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.699686  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:39.699693  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:39.699745  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:39.735568  148123 cri.go:89] found id: ""
	I1010 19:26:39.735592  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.735600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:39.735606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:39.735656  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:39.770021  148123 cri.go:89] found id: ""
	I1010 19:26:39.770049  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.770060  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:39.770067  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:39.770139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:39.804867  148123 cri.go:89] found id: ""
	I1010 19:26:39.804894  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.804903  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:39.804910  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:39.804972  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:39.847074  148123 cri.go:89] found id: ""
	I1010 19:26:39.847104  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.847115  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:39.847124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:39.847200  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:39.890334  148123 cri.go:89] found id: ""
	I1010 19:26:39.890368  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.890379  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:39.890387  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:39.890456  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:39.926316  148123 cri.go:89] found id: ""
	I1010 19:26:39.926346  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.926355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:39.926362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:39.926416  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:39.966179  148123 cri.go:89] found id: ""
	I1010 19:26:39.966215  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.966227  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:39.966239  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:39.966255  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:40.047457  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:40.047505  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.086611  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:40.086656  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:40.139123  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:40.139167  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:40.152966  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:40.152995  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:40.231009  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:42.731851  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:42.748201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:42.748287  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:42.787567  148123 cri.go:89] found id: ""
	I1010 19:26:42.787599  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.787607  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:42.787614  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:42.787697  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:42.831051  148123 cri.go:89] found id: ""
	I1010 19:26:42.831089  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.831100  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:42.831109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:42.831180  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:42.871352  148123 cri.go:89] found id: ""
	I1010 19:26:42.871390  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.871402  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:42.871410  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:42.871484  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:42.910283  148123 cri.go:89] found id: ""
	I1010 19:26:42.910316  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.910327  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:42.910335  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:42.910403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:42.946971  148123 cri.go:89] found id: ""
	I1010 19:26:42.947003  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.947012  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:42.947019  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:42.947087  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:42.985484  148123 cri.go:89] found id: ""
	I1010 19:26:42.985517  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.985528  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:42.985537  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:42.985603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:43.024500  148123 cri.go:89] found id: ""
	I1010 19:26:43.024535  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.024544  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:43.024552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:43.024608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:43.064008  148123 cri.go:89] found id: ""
	I1010 19:26:43.064039  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.064049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:43.064060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:43.064075  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:43.119367  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:43.119415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:43.133396  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:43.133428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:43.207447  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:43.207470  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:43.207491  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:43.290416  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:43.290456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.727919  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.227782  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:40.066461  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:42.565505  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.051835  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.052308  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.834115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:45.849172  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:45.849238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:45.886020  148123 cri.go:89] found id: ""
	I1010 19:26:45.886049  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.886060  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:45.886067  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:45.886152  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:45.923188  148123 cri.go:89] found id: ""
	I1010 19:26:45.923218  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.923245  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:45.923253  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:45.923316  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:45.960297  148123 cri.go:89] found id: ""
	I1010 19:26:45.960337  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.960351  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:45.960361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:45.960432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:45.995140  148123 cri.go:89] found id: ""
	I1010 19:26:45.995173  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.995184  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:45.995191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:45.995265  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:46.040390  148123 cri.go:89] found id: ""
	I1010 19:26:46.040418  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.040426  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:46.040433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:46.040500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:46.079999  148123 cri.go:89] found id: ""
	I1010 19:26:46.080029  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.080042  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:46.080052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:46.080105  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:46.117331  148123 cri.go:89] found id: ""
	I1010 19:26:46.117357  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.117364  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:46.117370  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:46.117441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:46.152248  148123 cri.go:89] found id: ""
	I1010 19:26:46.152279  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.152290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:46.152303  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:46.152319  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:46.192503  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:46.192533  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:46.246419  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:46.246456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:46.259864  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:46.259895  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:46.338518  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:46.338549  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:46.338567  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:45.726776  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.228318  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:44.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.065636  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.551013  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:50.053824  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.924216  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:48.938741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:48.938805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:48.973310  148123 cri.go:89] found id: ""
	I1010 19:26:48.973339  148123 logs.go:282] 0 containers: []
	W1010 19:26:48.973348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:48.973356  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:48.973411  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:49.010467  148123 cri.go:89] found id: ""
	I1010 19:26:49.010492  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.010500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:49.010506  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:49.010585  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:49.051621  148123 cri.go:89] found id: ""
	I1010 19:26:49.051646  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.051653  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:49.051664  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:49.051727  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:49.091090  148123 cri.go:89] found id: ""
	I1010 19:26:49.091121  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.091132  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:49.091140  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:49.091202  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:49.127669  148123 cri.go:89] found id: ""
	I1010 19:26:49.127712  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.127724  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:49.127732  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:49.127804  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:49.164446  148123 cri.go:89] found id: ""
	I1010 19:26:49.164476  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.164485  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:49.164491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:49.164553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:49.201222  148123 cri.go:89] found id: ""
	I1010 19:26:49.201263  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.201275  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:49.201284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:49.201345  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:49.238258  148123 cri.go:89] found id: ""
	I1010 19:26:49.238283  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.238290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:49.238299  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:49.238311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:49.313576  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:49.313619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:49.351066  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:49.351096  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:49.402772  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:49.402823  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:49.417713  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:49.417752  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:49.488834  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:51.989031  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:52.003056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:52.003140  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:52.039682  148123 cri.go:89] found id: ""
	I1010 19:26:52.039709  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.039718  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:52.039725  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:52.039788  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:52.081402  148123 cri.go:89] found id: ""
	I1010 19:26:52.081433  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.081443  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:52.081449  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:52.081502  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:52.118270  148123 cri.go:89] found id: ""
	I1010 19:26:52.118304  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.118315  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:52.118325  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:52.118392  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:52.154692  148123 cri.go:89] found id: ""
	I1010 19:26:52.154724  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.154735  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:52.154743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:52.154807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:52.190056  148123 cri.go:89] found id: ""
	I1010 19:26:52.190085  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.190094  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:52.190101  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:52.190161  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:52.225468  148123 cri.go:89] found id: ""
	I1010 19:26:52.225501  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.225511  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:52.225521  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:52.225589  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:52.260682  148123 cri.go:89] found id: ""
	I1010 19:26:52.260710  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.260718  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:52.260724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:52.260774  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:52.297613  148123 cri.go:89] found id: ""
	I1010 19:26:52.297642  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.297659  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:52.297672  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:52.297689  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:52.352224  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:52.352267  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:52.367003  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:52.367033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:52.443124  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:52.443157  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:52.443173  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:52.522391  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:52.522433  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:50.726363  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.727069  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:49.069109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:51.566132  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:53.567867  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.554195  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.050995  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.066877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:55.082191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:55.082258  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:55.120729  148123 cri.go:89] found id: ""
	I1010 19:26:55.120763  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.120775  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:55.120784  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:55.120863  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:55.154795  148123 cri.go:89] found id: ""
	I1010 19:26:55.154827  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.154839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:55.154848  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:55.154904  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:55.194846  148123 cri.go:89] found id: ""
	I1010 19:26:55.194876  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.194889  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:55.194897  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:55.194958  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:55.230173  148123 cri.go:89] found id: ""
	I1010 19:26:55.230201  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.230212  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:55.230220  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:55.230290  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:55.264999  148123 cri.go:89] found id: ""
	I1010 19:26:55.265025  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.265032  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:55.265039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:55.265096  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:55.303666  148123 cri.go:89] found id: ""
	I1010 19:26:55.303695  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.303703  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:55.303724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:55.303797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:55.342061  148123 cri.go:89] found id: ""
	I1010 19:26:55.342087  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.342098  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:55.342106  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:55.342171  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:55.378305  148123 cri.go:89] found id: ""
	I1010 19:26:55.378336  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.378345  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:55.378354  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:55.378365  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.416815  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:55.416865  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:55.467371  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:55.467415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:55.481704  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:55.481744  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:55.559219  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:55.559242  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:55.559254  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.141472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:58.154326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:58.154394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:58.193053  148123 cri.go:89] found id: ""
	I1010 19:26:58.193080  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.193088  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:58.193094  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:58.193142  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:58.232514  148123 cri.go:89] found id: ""
	I1010 19:26:58.232538  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.232545  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:58.232551  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:58.232599  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:58.266724  148123 cri.go:89] found id: ""
	I1010 19:26:58.266756  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.266765  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:58.266771  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:58.266824  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:58.301986  148123 cri.go:89] found id: ""
	I1010 19:26:58.302012  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.302019  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:58.302031  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:58.302080  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:58.340394  148123 cri.go:89] found id: ""
	I1010 19:26:58.340431  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.340440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:58.340448  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:58.340500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:58.407035  148123 cri.go:89] found id: ""
	I1010 19:26:58.407069  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.407081  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:58.407088  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:58.407151  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:58.446650  148123 cri.go:89] found id: ""
	I1010 19:26:58.446682  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.446694  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:58.446703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:58.446763  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:58.480719  148123 cri.go:89] found id: ""
	I1010 19:26:58.480750  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.480759  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:58.480768  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:58.480781  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.561568  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:58.561602  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.227199  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.726841  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:56.065787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.566732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.550718  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:59.550793  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.609237  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:58.609264  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:58.659705  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:58.659748  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:58.674646  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:58.674680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:58.750922  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.252098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:01.265420  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:01.265497  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:01.300133  148123 cri.go:89] found id: ""
	I1010 19:27:01.300168  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.300180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:01.300188  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:01.300272  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:01.337451  148123 cri.go:89] found id: ""
	I1010 19:27:01.337477  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.337485  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:01.337492  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:01.337539  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:01.371987  148123 cri.go:89] found id: ""
	I1010 19:27:01.372019  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.372028  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:01.372034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:01.372098  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:01.408996  148123 cri.go:89] found id: ""
	I1010 19:27:01.409022  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.409033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:01.409042  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:01.409109  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:01.443558  148123 cri.go:89] found id: ""
	I1010 19:27:01.443587  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.443595  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:01.443602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:01.443663  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:01.478517  148123 cri.go:89] found id: ""
	I1010 19:27:01.478546  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.478555  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:01.478561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:01.478611  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:01.513528  148123 cri.go:89] found id: ""
	I1010 19:27:01.513556  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.513568  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:01.513576  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:01.513641  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:01.558861  148123 cri.go:89] found id: ""
	I1010 19:27:01.558897  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.558909  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:01.558921  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:01.558937  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:01.599719  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:01.599747  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:01.650736  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:01.650774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:01.664643  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:01.664669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:01.736533  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.736557  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:01.736570  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:00.225540  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.226962  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:00.567193  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:03.066587  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.050439  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.050984  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:06.550977  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.317302  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:04.330450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:04.330519  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:04.368893  148123 cri.go:89] found id: ""
	I1010 19:27:04.368923  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.368932  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:04.368939  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:04.368993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:04.406598  148123 cri.go:89] found id: ""
	I1010 19:27:04.406625  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.406634  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:04.406640  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:04.406692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:04.441485  148123 cri.go:89] found id: ""
	I1010 19:27:04.441515  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.441525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:04.441532  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:04.441581  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:04.476663  148123 cri.go:89] found id: ""
	I1010 19:27:04.476690  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.476698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:04.476704  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:04.476765  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:04.512124  148123 cri.go:89] found id: ""
	I1010 19:27:04.512171  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.512186  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:04.512195  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:04.512260  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:04.547902  148123 cri.go:89] found id: ""
	I1010 19:27:04.547929  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.547940  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:04.547949  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:04.548008  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:04.585257  148123 cri.go:89] found id: ""
	I1010 19:27:04.585287  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.585297  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:04.585303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:04.585352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:04.621019  148123 cri.go:89] found id: ""
	I1010 19:27:04.621048  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.621057  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:04.621068  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:04.621080  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:04.662770  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:04.662801  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:04.714551  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:04.714592  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:04.730922  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:04.730951  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:04.841139  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.841163  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:04.841178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.428528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:07.442423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:07.442498  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:07.476722  148123 cri.go:89] found id: ""
	I1010 19:27:07.476753  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.476764  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:07.476772  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:07.476835  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:07.512211  148123 cri.go:89] found id: ""
	I1010 19:27:07.512245  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.512256  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:07.512263  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:07.512325  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:07.546546  148123 cri.go:89] found id: ""
	I1010 19:27:07.546588  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.546601  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:07.546619  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:07.546688  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:07.585743  148123 cri.go:89] found id: ""
	I1010 19:27:07.585768  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.585777  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:07.585783  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:07.585848  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:07.626827  148123 cri.go:89] found id: ""
	I1010 19:27:07.626855  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.626865  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:07.626874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:07.626960  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:07.661910  148123 cri.go:89] found id: ""
	I1010 19:27:07.661940  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.661948  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:07.661955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:07.662072  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:07.698255  148123 cri.go:89] found id: ""
	I1010 19:27:07.698288  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.698301  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:07.698309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:07.698374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:07.734389  148123 cri.go:89] found id: ""
	I1010 19:27:07.734417  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.734428  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:07.734440  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:07.734454  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.814089  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:07.814130  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:07.854799  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:07.854831  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:07.906595  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:07.906635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:07.921928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:07.921966  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:07.989839  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.727522  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.226694  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:05.565868  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.567139  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:09.050772  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:11.051291  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.490115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:10.504216  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:10.504292  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:10.554789  148123 cri.go:89] found id: ""
	I1010 19:27:10.554815  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.554825  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:10.554831  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:10.554889  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:10.618970  148123 cri.go:89] found id: ""
	I1010 19:27:10.618997  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.619005  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:10.619012  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:10.619061  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:10.666900  148123 cri.go:89] found id: ""
	I1010 19:27:10.666933  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.666946  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:10.666953  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:10.667023  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:10.704364  148123 cri.go:89] found id: ""
	I1010 19:27:10.704405  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.704430  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:10.704450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:10.704525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:10.742341  148123 cri.go:89] found id: ""
	I1010 19:27:10.742380  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.742389  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:10.742396  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:10.742461  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:10.783056  148123 cri.go:89] found id: ""
	I1010 19:27:10.783088  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.783098  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:10.783105  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:10.783174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:10.823198  148123 cri.go:89] found id: ""
	I1010 19:27:10.823224  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.823233  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:10.823243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:10.823302  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:10.858154  148123 cri.go:89] found id: ""
	I1010 19:27:10.858195  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.858204  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:10.858213  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:10.858229  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:10.935491  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:10.935518  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:10.935532  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:11.015653  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:11.015694  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:11.058765  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:11.058793  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:11.116748  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:11.116799  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:09.727270  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.225797  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.065372  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.065695  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.550669  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.051044  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.631579  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:13.644885  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:13.644953  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:13.685242  148123 cri.go:89] found id: ""
	I1010 19:27:13.685273  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.685285  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:13.685294  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:13.685364  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:13.720831  148123 cri.go:89] found id: ""
	I1010 19:27:13.720866  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.720877  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:13.720885  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:13.720947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:13.766374  148123 cri.go:89] found id: ""
	I1010 19:27:13.766404  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.766413  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:13.766419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:13.766468  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:13.804550  148123 cri.go:89] found id: ""
	I1010 19:27:13.804585  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.804597  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:13.804606  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:13.804674  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:13.842406  148123 cri.go:89] found id: ""
	I1010 19:27:13.842432  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.842440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:13.842460  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:13.842678  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:13.882365  148123 cri.go:89] found id: ""
	I1010 19:27:13.882402  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.882415  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:13.882423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:13.882505  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:13.926016  148123 cri.go:89] found id: ""
	I1010 19:27:13.926052  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.926065  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:13.926075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:13.926177  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:13.960485  148123 cri.go:89] found id: ""
	I1010 19:27:13.960523  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.960533  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:13.960542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:13.960558  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:14.013013  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:14.013055  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:14.026906  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:14.026935  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:14.104469  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:14.104494  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:14.104507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:14.185917  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:14.185959  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:16.732088  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:16.746646  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:16.746710  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:16.784715  148123 cri.go:89] found id: ""
	I1010 19:27:16.784739  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.784749  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:16.784755  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:16.784807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:16.824181  148123 cri.go:89] found id: ""
	I1010 19:27:16.824210  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.824220  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:16.824228  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:16.824289  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:16.860901  148123 cri.go:89] found id: ""
	I1010 19:27:16.860932  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.860941  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:16.860947  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:16.860997  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:16.895127  148123 cri.go:89] found id: ""
	I1010 19:27:16.895159  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.895175  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:16.895183  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:16.895266  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:16.931989  148123 cri.go:89] found id: ""
	I1010 19:27:16.932020  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.932028  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:16.932035  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:16.932086  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:16.967254  148123 cri.go:89] found id: ""
	I1010 19:27:16.967283  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.967292  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:16.967299  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:16.967347  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:17.003010  148123 cri.go:89] found id: ""
	I1010 19:27:17.003035  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.003043  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:17.003050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:17.003097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:17.040053  148123 cri.go:89] found id: ""
	I1010 19:27:17.040093  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.040102  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:17.040118  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:17.040131  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:17.093176  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:17.093217  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:17.108086  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:17.108116  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:17.186423  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:17.186452  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:17.186467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:17.271884  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:17.271926  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:14.227197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.739354  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:14.066233  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.565852  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.566337  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.051613  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:20.549888  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:19.817954  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:19.831924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:19.831993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:19.873415  148123 cri.go:89] found id: ""
	I1010 19:27:19.873441  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.873459  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:19.873466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:19.873527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:19.912174  148123 cri.go:89] found id: ""
	I1010 19:27:19.912207  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.912219  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:19.912227  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:19.912298  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:19.948422  148123 cri.go:89] found id: ""
	I1010 19:27:19.948456  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.948466  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:19.948472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:19.948524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:19.983917  148123 cri.go:89] found id: ""
	I1010 19:27:19.983949  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.983962  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:19.983970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:19.984024  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:20.019160  148123 cri.go:89] found id: ""
	I1010 19:27:20.019187  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.019198  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:20.019207  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:20.019271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:20.056073  148123 cri.go:89] found id: ""
	I1010 19:27:20.056104  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.056116  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:20.056124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:20.056187  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:20.095877  148123 cri.go:89] found id: ""
	I1010 19:27:20.095916  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.095928  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:20.095935  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:20.096007  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:20.132360  148123 cri.go:89] found id: ""
	I1010 19:27:20.132385  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.132394  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:20.132402  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:20.132413  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:20.190573  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:20.190619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:20.205785  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:20.205819  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:20.278882  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:20.278911  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:20.278924  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:20.363982  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:20.364024  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:22.906059  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:22.919839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:22.919912  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:22.958203  148123 cri.go:89] found id: ""
	I1010 19:27:22.958242  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.958252  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:22.958258  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:22.958312  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:22.995834  148123 cri.go:89] found id: ""
	I1010 19:27:22.995866  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.995874  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:22.995880  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:22.995945  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:23.035913  148123 cri.go:89] found id: ""
	I1010 19:27:23.035950  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.035962  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:23.035970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:23.036039  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:23.073995  148123 cri.go:89] found id: ""
	I1010 19:27:23.074036  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.074049  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:23.074057  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:23.074117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:23.111190  148123 cri.go:89] found id: ""
	I1010 19:27:23.111222  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.111234  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:23.111243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:23.111305  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:23.147771  148123 cri.go:89] found id: ""
	I1010 19:27:23.147797  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.147806  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:23.147814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:23.147878  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:23.185485  148123 cri.go:89] found id: ""
	I1010 19:27:23.185517  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.185527  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:23.185535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:23.185598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:23.222030  148123 cri.go:89] found id: ""
	I1010 19:27:23.222060  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.222070  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:23.222081  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:23.222097  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:23.301826  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:23.301849  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:23.301861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:23.378688  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:23.378730  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:23.426683  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:23.426717  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:23.478425  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:23.478467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:19.226994  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.727366  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.067094  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:23.567075  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:22.550076  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:24.551681  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:25.993768  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:26.008311  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:26.008383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:26.046315  148123 cri.go:89] found id: ""
	I1010 19:27:26.046343  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.046355  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:26.046364  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:26.046418  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:26.081823  148123 cri.go:89] found id: ""
	I1010 19:27:26.081847  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.081855  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:26.081861  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:26.081913  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:26.117139  148123 cri.go:89] found id: ""
	I1010 19:27:26.117167  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.117183  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:26.117191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:26.117242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:26.154477  148123 cri.go:89] found id: ""
	I1010 19:27:26.154501  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.154510  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:26.154517  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:26.154568  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:26.187497  148123 cri.go:89] found id: ""
	I1010 19:27:26.187527  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.187535  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:26.187541  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:26.187604  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:26.223110  148123 cri.go:89] found id: ""
	I1010 19:27:26.223140  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.223151  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:26.223160  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:26.223226  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:26.264975  148123 cri.go:89] found id: ""
	I1010 19:27:26.265003  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.265011  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:26.265017  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:26.265067  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:26.307467  148123 cri.go:89] found id: ""
	I1010 19:27:26.307494  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.307503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:26.307512  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:26.307524  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:26.346313  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:26.346342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:26.400069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:26.400109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:26.415467  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:26.415500  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:26.486986  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:26.487016  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:26.487033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:24.226736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.228720  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.726470  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.067100  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.565675  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:27.051110  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.051207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.553085  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.064522  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:29.077631  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:29.077696  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:29.116127  148123 cri.go:89] found id: ""
	I1010 19:27:29.116154  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.116164  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:29.116172  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:29.116241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:29.152863  148123 cri.go:89] found id: ""
	I1010 19:27:29.152893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.152901  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:29.152907  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:29.152963  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:29.189951  148123 cri.go:89] found id: ""
	I1010 19:27:29.189983  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.189992  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:29.189999  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:29.190060  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:29.226611  148123 cri.go:89] found id: ""
	I1010 19:27:29.226646  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.226657  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:29.226665  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:29.226729  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:29.271884  148123 cri.go:89] found id: ""
	I1010 19:27:29.271921  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.271934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:29.271944  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:29.272016  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:29.306128  148123 cri.go:89] found id: ""
	I1010 19:27:29.306168  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.306181  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:29.306191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:29.306255  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:29.341869  148123 cri.go:89] found id: ""
	I1010 19:27:29.341893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.341901  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:29.341908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:29.341962  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:29.376213  148123 cri.go:89] found id: ""
	I1010 19:27:29.376240  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.376249  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:29.376259  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:29.376273  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:29.428827  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:29.428878  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:29.443719  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:29.443754  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:29.516166  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:29.516193  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:29.516208  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:29.596643  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:29.596681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:32.135286  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:32.148791  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:32.148893  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:32.185511  148123 cri.go:89] found id: ""
	I1010 19:27:32.185542  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.185553  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:32.185560  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:32.185626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:32.222908  148123 cri.go:89] found id: ""
	I1010 19:27:32.222934  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.222942  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:32.222948  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:32.222999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:32.260998  148123 cri.go:89] found id: ""
	I1010 19:27:32.261033  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.261045  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:32.261063  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:32.261128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:32.298311  148123 cri.go:89] found id: ""
	I1010 19:27:32.298339  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.298348  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:32.298355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:32.298409  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:32.334134  148123 cri.go:89] found id: ""
	I1010 19:27:32.334220  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.334236  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:32.334249  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:32.334319  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:32.369690  148123 cri.go:89] found id: ""
	I1010 19:27:32.369723  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.369735  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:32.369743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:32.369807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:32.407164  148123 cri.go:89] found id: ""
	I1010 19:27:32.407210  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.407218  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:32.407224  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:32.407279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:32.446468  148123 cri.go:89] found id: ""
	I1010 19:27:32.446494  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.446505  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:32.446519  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:32.446535  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:32.497953  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:32.497997  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:32.513590  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:32.513620  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:32.594037  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:32.594072  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:32.594090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:32.673546  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:32.673587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:30.727725  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:32.727813  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.066731  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:33.067815  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:34.050574  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:36.550119  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.226084  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:35.241152  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:35.241222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:35.283208  148123 cri.go:89] found id: ""
	I1010 19:27:35.283245  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.283269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:35.283286  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:35.283346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:35.320406  148123 cri.go:89] found id: ""
	I1010 19:27:35.320444  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.320457  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:35.320466  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:35.320523  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:35.360984  148123 cri.go:89] found id: ""
	I1010 19:27:35.361015  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.361027  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:35.361034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:35.361097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:35.400191  148123 cri.go:89] found id: ""
	I1010 19:27:35.400219  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.400230  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:35.400244  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:35.400314  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:35.438092  148123 cri.go:89] found id: ""
	I1010 19:27:35.438126  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.438139  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:35.438147  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:35.438215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:35.472656  148123 cri.go:89] found id: ""
	I1010 19:27:35.472681  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.472688  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:35.472694  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:35.472743  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.505895  148123 cri.go:89] found id: ""
	I1010 19:27:35.505931  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.505942  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:35.505950  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:35.506015  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:35.540406  148123 cri.go:89] found id: ""
	I1010 19:27:35.540441  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.540451  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:35.540462  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:35.540478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:35.555874  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:35.555903  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:35.628400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:35.628427  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:35.628453  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:35.708262  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:35.708305  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:35.752796  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:35.752824  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.306212  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:38.319473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:38.319543  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:38.357493  148123 cri.go:89] found id: ""
	I1010 19:27:38.357520  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.357529  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:38.357536  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:38.357588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:38.395908  148123 cri.go:89] found id: ""
	I1010 19:27:38.395935  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.395943  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:38.395949  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:38.396010  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:38.434495  148123 cri.go:89] found id: ""
	I1010 19:27:38.434529  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.434539  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:38.434545  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:38.434608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:38.474647  148123 cri.go:89] found id: ""
	I1010 19:27:38.474686  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.474698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:38.474710  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:38.474784  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:38.511105  148123 cri.go:89] found id: ""
	I1010 19:27:38.511133  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.511141  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:38.511148  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:38.511199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:38.551011  148123 cri.go:89] found id: ""
	I1010 19:27:38.551055  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.551066  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:38.551074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:38.551139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.227301  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:37.726528  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.567838  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.066658  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.552499  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.544561  147758 pod_ready.go:82] duration metric: took 4m0.00091784s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	E1010 19:27:40.544600  147758 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:27:40.544623  147758 pod_ready.go:39] duration metric: took 4m15.623470592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:27:40.544664  147758 kubeadm.go:597] duration metric: took 4m22.92080204s to restartPrimaryControlPlane
	W1010 19:27:40.544737  147758 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:40.544829  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:38.590579  148123 cri.go:89] found id: ""
	I1010 19:27:38.590606  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.590615  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:38.590621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:38.590686  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:38.627775  148123 cri.go:89] found id: ""
	I1010 19:27:38.627812  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.627826  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:38.627838  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:38.627854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.683195  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:38.683239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:38.697220  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:38.697250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:38.771619  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:38.771651  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:38.771668  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:38.851324  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:38.851362  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.408165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:41.422958  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:41.423032  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:41.466774  148123 cri.go:89] found id: ""
	I1010 19:27:41.466806  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.466816  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:41.466823  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:41.466874  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:41.506429  148123 cri.go:89] found id: ""
	I1010 19:27:41.506470  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.506482  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:41.506489  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:41.506555  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:41.548929  148123 cri.go:89] found id: ""
	I1010 19:27:41.548965  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.548976  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:41.548983  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:41.549044  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:41.592243  148123 cri.go:89] found id: ""
	I1010 19:27:41.592274  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.592283  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:41.592290  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:41.592342  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:41.628516  148123 cri.go:89] found id: ""
	I1010 19:27:41.628558  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.628570  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:41.628578  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:41.628650  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:41.665011  148123 cri.go:89] found id: ""
	I1010 19:27:41.665041  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.665053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:41.665060  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:41.665112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:41.702650  148123 cri.go:89] found id: ""
	I1010 19:27:41.702681  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.702692  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:41.702700  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:41.702764  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:41.738971  148123 cri.go:89] found id: ""
	I1010 19:27:41.739002  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.739021  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:41.739031  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:41.739062  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:41.813296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:41.813321  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:41.813335  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:41.895138  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:41.895185  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.941975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:41.942012  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:41.994838  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:41.994888  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:39.727140  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:41.728263  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.566241  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:43.065219  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:44.511153  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:44.524871  148123 kubeadm.go:597] duration metric: took 4m4.194337013s to restartPrimaryControlPlane
	W1010 19:27:44.524953  148123 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:44.524988  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:44.994063  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:27:45.010926  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:27:45.021757  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:27:45.032246  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:27:45.032270  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:27:45.032326  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:27:45.042799  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:27:45.042866  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:27:45.053017  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:27:45.063373  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:27:45.063445  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:27:45.074689  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.085187  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:27:45.085241  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.096275  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:27:45.106686  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:27:45.106750  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
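	(Illustrative sketch, not part of the captured log.) The stale-config pass above checks each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes the file when the check fails; here the files simply do not exist after the kubeadm reset, so grep exits non-zero and each file is removed anyway. A rough shell equivalent, with the endpoint and file names taken from the lines above:

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # Remove the kubeconfig if it does not reference the expected endpoint
	      # (grep also exits non-zero when the file is missing, as in the log above).
	      if ! sudo grep -q "${endpoint}" "/etc/kubernetes/${f}"; then
	        sudo rm -f "/etc/kubernetes/${f}"
	      fi
	    done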
	I1010 19:27:45.117018  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:27:45.358920  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:27:44.226853  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:46.227586  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:48.727469  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:45.066410  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:47.569864  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:51.230704  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:53.727351  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:50.065845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:52.066267  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:55.727457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:58.226861  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:54.564611  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:56.566702  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:00.728542  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.225779  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:59.065614  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:01.068088  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.566502  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.739904  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.195045639s)
	I1010 19:28:06.739984  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:06.756046  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:06.768580  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:06.780663  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:06.780732  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:06.780807  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:28:06.792092  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:06.792179  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:06.804515  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:28:06.814969  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:06.815040  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:06.826056  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.836050  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:06.836108  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.846125  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:28:06.855505  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:06.855559  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:06.865367  147758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:06.916227  147758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:06.916375  147758 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:07.036539  147758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:07.036652  147758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:07.036762  147758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:07.044897  147758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:07.046978  147758 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:07.047117  147758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:07.047229  147758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:07.047384  147758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:07.047467  147758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:07.047584  147758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:07.047675  147758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:07.047794  147758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:07.047902  147758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:07.048005  147758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:07.048093  147758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:07.048142  147758 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:07.048210  147758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:07.127836  147758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:07.434492  147758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:07.487567  147758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:07.731314  147758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:07.919060  147758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:07.919565  147758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:07.922740  147758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:05.227611  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.229836  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.065246  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:08.067360  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.925140  147758 out.go:235]   - Booting up control plane ...
	I1010 19:28:07.925239  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:07.925356  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:07.925444  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:07.944375  147758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:07.951182  147758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:07.951274  147758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:08.087325  147758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:08.087560  147758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:08.598361  147758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.081439ms
	I1010 19:28:08.598502  147758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:09.727932  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:12.227939  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:10.566945  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:13.067142  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.100517  147758 kubeadm.go:310] [api-check] The API server is healthy after 5.501985157s
	I1010 19:28:14.119932  147758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:14.149557  147758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:14.207413  147758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:14.207735  147758 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-541370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:14.226199  147758 kubeadm.go:310] [bootstrap-token] Using token: sbg4v0.t5me93bb5vn8m913
	I1010 19:28:14.228059  147758 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:14.228208  147758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:14.241706  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:14.256554  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:14.263129  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:14.274346  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:14.282313  147758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:14.507850  147758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:14.970234  147758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:15.508328  147758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:15.509530  147758 kubeadm.go:310] 
	I1010 19:28:15.509635  147758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:15.509653  147758 kubeadm.go:310] 
	I1010 19:28:15.509743  147758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:15.509762  147758 kubeadm.go:310] 
	I1010 19:28:15.509795  147758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:15.509888  147758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:15.509954  147758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:15.509970  147758 kubeadm.go:310] 
	I1010 19:28:15.510083  147758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:15.510103  147758 kubeadm.go:310] 
	I1010 19:28:15.510203  147758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:15.510214  147758 kubeadm.go:310] 
	I1010 19:28:15.510297  147758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:15.510410  147758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:15.510489  147758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:15.510495  147758 kubeadm.go:310] 
	I1010 19:28:15.510603  147758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:15.510707  147758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:15.510724  147758 kubeadm.go:310] 
	I1010 19:28:15.510807  147758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.510958  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:15.511005  147758 kubeadm.go:310] 	--control-plane 
	I1010 19:28:15.511034  147758 kubeadm.go:310] 
	I1010 19:28:15.511161  147758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:15.511173  147758 kubeadm.go:310] 
	I1010 19:28:15.511268  147758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.511403  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:15.512298  147758 kubeadm.go:310] W1010 19:28:06.890572    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512594  147758 kubeadm.go:310] W1010 19:28:06.891448    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512702  147758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:15.512734  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:28:15.512744  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:15.514703  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:15.516229  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:15.527554  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:15.549266  147758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:15.549362  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:15.549399  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-541370 minikube.k8s.io/updated_at=2024_10_10T19_28_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=embed-certs-541370 minikube.k8s.io/primary=true
	I1010 19:28:15.590732  147758 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:15.740942  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.241392  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.741807  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:14.229241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:16.727260  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.059512  148525 pod_ready.go:82] duration metric: took 4m0.00022742s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:14.059550  148525 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:28:14.059569  148525 pod_ready.go:39] duration metric: took 4m7.001942194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:14.059614  148525 kubeadm.go:597] duration metric: took 4m14.998320151s to restartPrimaryControlPlane
	W1010 19:28:14.059672  148525 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:28:14.059698  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:28:17.241315  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:17.741580  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.241006  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.742042  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.241251  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.741030  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.862541  147758 kubeadm.go:1113] duration metric: took 4.313246481s to wait for elevateKubeSystemPrivileges
	I1010 19:28:19.862579  147758 kubeadm.go:394] duration metric: took 5m2.288571479s to StartCluster
	I1010 19:28:19.862628  147758 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.862751  147758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:19.864528  147758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.864812  147758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:19.864910  147758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:19.865019  147758 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-541370"
	I1010 19:28:19.865041  147758 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-541370"
	W1010 19:28:19.865053  147758 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:19.865062  147758 addons.go:69] Setting default-storageclass=true in profile "embed-certs-541370"
	I1010 19:28:19.865085  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865077  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:19.865129  147758 addons.go:69] Setting metrics-server=true in profile "embed-certs-541370"
	I1010 19:28:19.865164  147758 addons.go:234] Setting addon metrics-server=true in "embed-certs-541370"
	W1010 19:28:19.865179  147758 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:19.865115  147758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-541370"
	I1010 19:28:19.865215  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865558  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865593  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865607  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865629  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865595  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865725  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.866857  147758 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:19.868590  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:19.882524  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I1010 19:28:19.882595  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I1010 19:28:19.882678  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I1010 19:28:19.883065  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883168  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883281  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883559  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883575  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883657  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883669  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883802  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883818  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883968  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.883976  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884141  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884194  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.884408  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884437  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.884684  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884746  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.887912  147758 addons.go:234] Setting addon default-storageclass=true in "embed-certs-541370"
	W1010 19:28:19.887942  147758 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:19.887973  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.888333  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.888383  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.901588  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1010 19:28:19.902131  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.902597  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.902621  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.902927  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.903101  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.904556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.905207  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1010 19:28:19.905621  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.906188  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.906209  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.906599  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.906647  147758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:19.906837  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.907699  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1010 19:28:19.908147  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.908557  147758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:19.908584  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:19.908610  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.908705  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.908717  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.908745  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.909364  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.910154  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.910208  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.910840  147758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:19.912716  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.912722  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:19.912743  147758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:19.912769  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.913199  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.913224  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.913500  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.913682  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.913845  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.913972  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.921800  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922343  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.922374  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922653  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.922842  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.922965  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.923108  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.935097  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I1010 19:28:19.935605  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.936123  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.936146  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.936561  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.936747  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.938789  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.939019  147758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:19.939034  147758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:19.939054  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.941682  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942137  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.942165  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942404  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.942642  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.942767  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.942915  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:20.108247  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:20.149819  147758 node_ready.go:35] waiting up to 6m0s for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163096  147758 node_ready.go:49] node "embed-certs-541370" has status "Ready":"True"
	I1010 19:28:20.163118  147758 node_ready.go:38] duration metric: took 13.26779ms for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163128  147758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:20.168620  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:20.241952  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:20.241978  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:20.249679  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:20.290149  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:20.290190  147758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:20.291475  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:20.410539  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.410582  147758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:20.491567  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.684370  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684403  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.684695  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.684742  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.684749  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.684756  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684764  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.685029  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.685059  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.685036  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.695901  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.695926  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.696202  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.696249  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439463  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147952803s)
	I1010 19:28:21.439626  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.439659  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.439951  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.439969  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.439976  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439997  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.440009  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.440299  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.440298  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.440314  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.780486  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.288854773s)
	I1010 19:28:21.780551  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.780567  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.780948  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.780980  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.780996  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781007  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.781016  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.781289  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.781310  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781331  147758 addons.go:475] Verifying addon metrics-server=true in "embed-certs-541370"
	I1010 19:28:21.783512  147758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:21.784958  147758 addons.go:510] duration metric: took 1.92006141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
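	At this point the metrics-server addon manifests have been applied but its pod is still Pending, which is why the surrounding log lines keep reporting "Ready":"False" for the metrics-server pods. A minimal way to inspect that state by hand, assuming the addon's usual metrics-server Deployment in the kube-system namespace and the profile-named kubectl context, could look like:
	# list the metrics-server pod(s) and their readiness (label and namespace assumed from the addon defaults)
	kubectl --context embed-certs-541370 -n kube-system get pods -l k8s-app=metrics-server
	# show rollout status and recent events for the pending deployment
	kubectl --context embed-certs-541370 -n kube-system describe deployment metrics-server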
	I1010 19:28:19.225844  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:21.227960  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:23.726439  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:22.195129  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:24.678736  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:25.727053  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.727657  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.177348  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:29.177459  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.177485  147758 pod_ready.go:82] duration metric: took 9.008841503s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.177495  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182744  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.182777  147758 pod_ready.go:82] duration metric: took 5.273263ms for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182791  147758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191507  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.191539  147758 pod_ready.go:82] duration metric: took 8.738961ms for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191554  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199167  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.199218  147758 pod_ready.go:82] duration metric: took 7.635672ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199234  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204558  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.204581  147758 pod_ready.go:82] duration metric: took 5.337574ms for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204591  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573781  147758 pod_ready.go:93] pod "kube-proxy-6hdds" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.573808  147758 pod_ready.go:82] duration metric: took 369.210969ms for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573818  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974015  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.974039  147758 pod_ready.go:82] duration metric: took 400.214845ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974048  147758 pod_ready.go:39] duration metric: took 9.810911064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:29.974066  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:29.974120  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:29.991332  147758 api_server.go:72] duration metric: took 10.126480862s to wait for apiserver process to appear ...
	I1010 19:28:29.991356  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:29.991382  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:28:29.995855  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:28:29.997488  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:28:29.997516  147758 api_server.go:131] duration metric: took 6.152312ms to wait for apiserver health ...
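	The healthz probe logged just above is a plain HTTPS GET against the apiserver; as an illustrative sketch, the same check can be reproduced from the host with curl, using the IP and port shown in the log and skipping certificate verification for brevity:
	# query the apiserver health endpoint minikube just checked; prints "ok" when healthy
	curl -sk https://192.168.39.120:8443/healthz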
	I1010 19:28:29.997526  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:28:30.176631  147758 system_pods.go:59] 9 kube-system pods found
	I1010 19:28:30.176662  147758 system_pods.go:61] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.176668  147758 system_pods.go:61] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.176672  147758 system_pods.go:61] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.176676  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.176680  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.176683  147758 system_pods.go:61] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.176686  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.176693  147758 system_pods.go:61] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.176699  147758 system_pods.go:61] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.176707  147758 system_pods.go:74] duration metric: took 179.174083ms to wait for pod list to return data ...
	I1010 19:28:30.176714  147758 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:28:30.375326  147758 default_sa.go:45] found service account: "default"
	I1010 19:28:30.375361  147758 default_sa.go:55] duration metric: took 198.640267ms for default service account to be created ...
	I1010 19:28:30.375374  147758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:28:30.578749  147758 system_pods.go:86] 9 kube-system pods found
	I1010 19:28:30.578780  147758 system_pods.go:89] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.578786  147758 system_pods.go:89] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.578790  147758 system_pods.go:89] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.578794  147758 system_pods.go:89] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.578797  147758 system_pods.go:89] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.578801  147758 system_pods.go:89] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.578804  147758 system_pods.go:89] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.578810  147758 system_pods.go:89] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.578814  147758 system_pods.go:89] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.578822  147758 system_pods.go:126] duration metric: took 203.441477ms to wait for k8s-apps to be running ...
	I1010 19:28:30.578829  147758 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:28:30.578877  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:30.596523  147758 system_svc.go:56] duration metric: took 17.684729ms WaitForService to wait for kubelet
	I1010 19:28:30.596553  147758 kubeadm.go:582] duration metric: took 10.731708748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:28:30.596573  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:28:30.774749  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:28:30.774783  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:28:30.774807  147758 node_conditions.go:105] duration metric: took 178.228671ms to run NodePressure ...
	I1010 19:28:30.774822  147758 start.go:241] waiting for startup goroutines ...
	I1010 19:28:30.774831  147758 start.go:246] waiting for cluster config update ...
	I1010 19:28:30.774845  147758 start.go:255] writing updated cluster config ...
	I1010 19:28:30.775121  147758 ssh_runner.go:195] Run: rm -f paused
	I1010 19:28:30.826689  147758 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:28:30.828795  147758 out.go:177] * Done! kubectl is now configured to use "embed-certs-541370" cluster and "default" namespace by default
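	Once this message appears the kubeconfig has been rewritten for the profile, so a quick sanity check of the restarted cluster might look like the following (context name taken from the profile; commands are illustrative, not part of the test run):
	kubectl config current-context        # should print embed-certs-541370
	kubectl get nodes                     # node embed-certs-541370 should report Ready
	kubectl -n kube-system get pods       # control-plane pods Running; metrics-server may still be Pending, as the log shows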
	I1010 19:28:29.728096  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:32.229632  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:34.726536  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:36.727032  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:38.727488  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:40.372903  148525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.31317648s)
	I1010 19:28:40.372991  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:40.389319  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:40.400123  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:40.411906  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:40.411932  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:40.411976  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:28:40.421840  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:40.421904  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:40.432229  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:28:40.442121  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:40.442203  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:40.452969  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.463085  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:40.463146  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.473103  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:28:40.482854  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:40.482914  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:40.494023  148525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:40.543369  148525 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:40.543466  148525 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:40.657301  148525 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:40.657462  148525 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:40.657579  148525 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:40.669222  148525 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:40.670995  148525 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:40.671102  148525 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:40.671171  148525 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:40.671284  148525 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:40.671374  148525 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:40.671471  148525 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:40.671557  148525 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:40.671650  148525 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:40.671751  148525 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:40.671895  148525 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:40.672000  148525 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:40.672056  148525 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:40.672136  148525 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:40.876613  148525 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:41.109518  148525 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:41.186751  148525 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:41.424710  148525 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:41.479611  148525 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:41.480235  148525 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:41.483222  148525 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:41.227521  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:43.728023  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:41.484809  148525 out.go:235]   - Booting up control plane ...
	I1010 19:28:41.484935  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:41.485020  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:41.485317  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:41.506919  148525 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:41.517006  148525 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:41.517077  148525 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:41.653211  148525 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:41.653364  148525 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:42.655360  148525 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001910447s
	I1010 19:28:42.655482  148525 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:47.658431  148525 kubeadm.go:310] [api-check] The API server is healthy after 5.003169217s
	I1010 19:28:47.676178  148525 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:47.694752  148525 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:47.720376  148525 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:47.720645  148525 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-361847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:47.736489  148525 kubeadm.go:310] [bootstrap-token] Using token: cprf0t.lm4xp75yi0cdu4sy
	I1010 19:28:46.228217  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:48.726740  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:47.737958  148525 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:47.738089  148525 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:47.750073  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:47.758010  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:47.761649  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:47.768953  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:47.774428  148525 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:48.065988  148525 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:48.502538  148525 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:49.066479  148525 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:49.069842  148525 kubeadm.go:310] 
	I1010 19:28:49.069937  148525 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:49.069947  148525 kubeadm.go:310] 
	I1010 19:28:49.070046  148525 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:49.070058  148525 kubeadm.go:310] 
	I1010 19:28:49.070089  148525 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:49.070166  148525 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:49.070254  148525 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:49.070265  148525 kubeadm.go:310] 
	I1010 19:28:49.070342  148525 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:49.070353  148525 kubeadm.go:310] 
	I1010 19:28:49.070446  148525 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:49.070478  148525 kubeadm.go:310] 
	I1010 19:28:49.070544  148525 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:49.070640  148525 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:49.070750  148525 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:49.070773  148525 kubeadm.go:310] 
	I1010 19:28:49.070880  148525 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:49.070990  148525 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:49.071001  148525 kubeadm.go:310] 
	I1010 19:28:49.071153  148525 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.071299  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:49.071330  148525 kubeadm.go:310] 	--control-plane 
	I1010 19:28:49.071349  148525 kubeadm.go:310] 
	I1010 19:28:49.071468  148525 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:49.071497  148525 kubeadm.go:310] 
	I1010 19:28:49.072228  148525 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.072354  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:49.074595  148525 kubeadm.go:310] W1010 19:28:40.525557    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.074944  148525 kubeadm.go:310] W1010 19:28:40.526329    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.075102  148525 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
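
The kubeadm output above prints join commands that carry a --discovery-token-ca-cert-hash value. That hash is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. Below is a minimal Go sketch of the derivation; the /var/lib/minikube/certs/ca.crt path is an assumption for illustration and is not taken from this log.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Assumed location of the cluster CA certificate; adjust as needed.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Printf("sha256:%x\n", sum)
    }
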
	I1010 19:28:49.075143  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:28:49.075166  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:49.077190  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:49.078665  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:49.091792  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
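
The step above writes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist, but the file contents are not shown in the log. The sketch below emits a representative bridge CNI configuration from Go, purely to illustrate the shape of such a conflist; the field values are assumptions and the actual file minikube writes may differ.

    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        // Representative bridge conflist; values are illustrative, not the real file.
        conf := map[string]any{
            "cniVersion": "1.0.0",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":        "bridge",
                    "bridge":      "bridge",
                    "isGateway":   true,
                    "ipMasq":      true,
                    "hairpinMode": true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        data, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
            panic(err)
        }
    }
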
	I1010 19:28:49.113801  148525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:49.113920  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-361847 minikube.k8s.io/updated_at=2024_10_10T19_28_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=default-k8s-diff-port-361847 minikube.k8s.io/primary=true
	I1010 19:28:49.114074  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.154398  148525 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:49.351271  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.852049  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.351441  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.852022  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.351391  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.851329  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.351840  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.852392  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.351397  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.443325  148525 kubeadm.go:1113] duration metric: took 4.329288133s to wait for elevateKubeSystemPrivileges
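
The repeated "kubectl get sa default" runs above are a poll: the elevateKubeSystemPrivileges step waits for the "default" ServiceAccount to appear before binding kube-system to cluster-admin. A rough client-go equivalent of that wait is sketched below; the kubeconfig path matches the one used in the commands above, but treat it as an assumption for illustration.

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        // Poll every 500ms until the "default" ServiceAccount exists in "default".
        for {
            _, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
            if err == nil {
                fmt.Println("default service account is present")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for default service account")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
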
	I1010 19:28:53.443363  148525 kubeadm.go:394] duration metric: took 4m54.439732071s to StartCluster
	I1010 19:28:53.443386  148525 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.443481  148525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:53.445465  148525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.445747  148525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:53.445842  148525 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:53.445957  148525 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.445980  148525 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.445992  148525 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:53.446004  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:53.446026  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446065  148525 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446100  148525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-361847"
	I1010 19:28:53.446085  148525 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446137  148525 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.446151  148525 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:53.446242  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446515  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.446562  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447089  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447135  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447315  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447360  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.450779  148525 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:53.452838  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:53.465502  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I1010 19:28:53.466020  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.466572  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.466594  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.466772  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1010 19:28:53.467034  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.467209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.467310  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.467828  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.467857  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.467899  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1010 19:28:53.468270  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.468451  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.468866  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.468891  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.469102  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.469150  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.469484  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.470068  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.470114  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.471192  148525 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.471213  148525 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:53.471261  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.471618  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.471664  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.486550  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1010 19:28:53.487068  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.487608  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.487626  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.488015  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.488329  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.490200  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I1010 19:28:53.490240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.490790  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.491318  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.491341  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.491682  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.491957  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1010 19:28:53.492100  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.492423  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.492731  148525 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:53.492811  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.492831  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.493240  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.493885  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.493979  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.494031  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.494359  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:53.494381  148525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:53.494397  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.495771  148525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:51.226596  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227299  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227335  147213 pod_ready.go:82] duration metric: took 4m0.007224391s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:53.227346  147213 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1010 19:28:53.227355  147213 pod_ready.go:39] duration metric: took 4m5.554224355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
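
Here the extra 4m0s wait expires because the metrics-server pod never reports Ready. The readiness decision comes from the pod's PodReady condition; below is a small client-go sketch of that check, with the kubeconfig path assumed for illustration and the pod name taken from the log lines above.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Kubeconfig path is an assumption for illustration.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
            "metrics-server-6867b74b74-8w9lk", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
    }
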
	I1010 19:28:53.227375  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:53.227419  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:53.227484  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:53.288713  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.288749  147213 cri.go:89] found id: ""
	I1010 19:28:53.288759  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:53.288823  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.294819  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:53.294904  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:53.340169  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:53.340197  147213 cri.go:89] found id: ""
	I1010 19:28:53.340207  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:53.340271  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.345214  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:53.345292  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:53.392808  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.392838  147213 cri.go:89] found id: ""
	I1010 19:28:53.392859  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:53.392921  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.398275  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:53.398361  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:53.439567  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.439594  147213 cri.go:89] found id: ""
	I1010 19:28:53.439604  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:53.439665  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.444366  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:53.444436  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:53.522580  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:53.522597  147213 cri.go:89] found id: ""
	I1010 19:28:53.522605  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:53.522654  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.528890  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:53.528974  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:53.575933  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:53.575963  147213 cri.go:89] found id: ""
	I1010 19:28:53.575975  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:53.576035  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.581693  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:53.581763  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:53.619789  147213 cri.go:89] found id: ""
	I1010 19:28:53.619819  147213 logs.go:282] 0 containers: []
	W1010 19:28:53.619831  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:53.619839  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:53.619899  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:53.659715  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:53.659746  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:53.659752  147213 cri.go:89] found id: ""
	I1010 19:28:53.659762  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:53.659828  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.664377  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.668766  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:53.668796  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:53.685976  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:53.686007  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
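
The log collection above shells out to crictl with a fixed 400-line tail per container. A minimal Go sketch of the same invocation pattern follows, reusing a container ID that appears in the output above; sudo and /usr/bin/crictl are assumed to be available on the node.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs runs the same crictl command shape seen in the log above.
    func tailContainerLogs(id string, lines int) (string, error) {
        cmd := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", fmt.Sprint(lines), id)
        out, err := cmd.CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := tailContainerLogs("20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba", 400)
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
        fmt.Print(out)
    }
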
	I1010 19:28:53.497232  148525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:53.497251  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:53.497273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.497732  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498599  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.498627  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498971  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.499159  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.499312  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.499414  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.501044  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.501531  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501782  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.501956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.502080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.502232  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.512240  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1010 19:28:53.512809  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.513347  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.513368  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.513787  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.514001  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.515436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.515639  148525 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.515659  148525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:53.515681  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.518128  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518596  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.518628  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518909  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.519080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.519216  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.519376  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.712871  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:53.755059  148525 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766564  148525 node_ready.go:49] node "default-k8s-diff-port-361847" has status "Ready":"True"
	I1010 19:28:53.766590  148525 node_ready.go:38] duration metric: took 11.490223ms for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766603  148525 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.777458  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:53.875493  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:53.875525  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:53.911443  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.944885  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:53.944919  148525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:53.945487  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:54.011209  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.011239  148525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:54.039679  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.598172  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598226  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598584  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598608  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.598619  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598898  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:54.598931  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598939  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.643365  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.643392  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.643734  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.643760  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287018  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.341483807s)
	I1010 19:28:55.287045  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.247326452s)
	I1010 19:28:55.287089  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287094  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287112  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287440  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287479  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287506  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287524  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287570  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287589  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287598  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287607  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287818  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287831  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.287855  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287862  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287872  148525 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-361847"
	I1010 19:28:55.287880  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.289944  148525 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:53.841387  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:53.841441  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.892951  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:53.893005  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.947636  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:53.947668  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.992969  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:53.992998  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:54.520652  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:54.520703  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:28:54.588366  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:54.588418  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:54.651179  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:54.651227  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:54.712881  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:54.712925  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:54.779030  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:54.779094  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:54.821961  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:54.822002  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:54.871409  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:54.871446  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:57.425310  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:57.442308  147213 api_server.go:72] duration metric: took 4m17.02881034s to wait for apiserver process to appear ...
	I1010 19:28:57.442343  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:57.442383  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:57.442444  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:57.481392  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.481420  147213 cri.go:89] found id: ""
	I1010 19:28:57.481430  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:57.481503  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.486191  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:57.486269  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:57.532238  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.532271  147213 cri.go:89] found id: ""
	I1010 19:28:57.532284  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:57.532357  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.538105  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:57.538188  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:57.579729  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:57.579757  147213 cri.go:89] found id: ""
	I1010 19:28:57.579767  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:57.579833  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.584494  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:57.584568  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:57.623920  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:57.623949  147213 cri.go:89] found id: ""
	I1010 19:28:57.623960  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:57.624028  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.628927  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:57.629018  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:57.669669  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.669698  147213 cri.go:89] found id: ""
	I1010 19:28:57.669707  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:57.669771  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.674449  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:57.674526  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:57.721856  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:57.721881  147213 cri.go:89] found id: ""
	I1010 19:28:57.721891  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:57.721955  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.726422  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:57.726497  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:57.764464  147213 cri.go:89] found id: ""
	I1010 19:28:57.764499  147213 logs.go:282] 0 containers: []
	W1010 19:28:57.764512  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:57.764521  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:57.764595  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:57.809758  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:57.809784  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:57.809788  147213 cri.go:89] found id: ""
	I1010 19:28:57.809797  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:57.809854  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.815576  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.820152  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:57.820181  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.869339  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:57.869383  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.918698  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:57.918739  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.960939  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:57.960985  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:58.013572  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:58.013612  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:58.053247  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:58.053277  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:58.507428  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:58.507473  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:58.552704  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:58.552742  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:58.672077  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:58.672127  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:58.690997  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:58.691049  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:58.735251  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:58.735287  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:55.291700  148525 addons.go:510] duration metric: took 1.845864985s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:55.785186  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:57.789567  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:00.284444  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:01.297627  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.297660  148525 pod_ready.go:82] duration metric: took 7.520173084s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.297676  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804654  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.804676  148525 pod_ready.go:82] duration metric: took 506.992872ms for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804690  148525 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809788  148525 pod_ready.go:93] pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.809814  148525 pod_ready.go:82] duration metric: took 5.116023ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809825  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814460  148525 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.814486  148525 pod_ready.go:82] duration metric: took 4.652085ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814501  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819719  148525 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.819741  148525 pod_ready.go:82] duration metric: took 5.231258ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819753  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082285  148525 pod_ready.go:93] pod "kube-proxy-jlvn6" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.082325  148525 pod_ready.go:82] duration metric: took 262.562954ms for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082342  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481705  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.481730  148525 pod_ready.go:82] duration metric: took 399.378957ms for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481742  148525 pod_ready.go:39] duration metric: took 8.715126416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:29:02.481779  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:29:02.481832  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:29:02.498706  148525 api_server.go:72] duration metric: took 9.052891898s to wait for apiserver process to appear ...
	I1010 19:29:02.498760  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:29:02.498795  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:29:02.503501  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:29:02.504594  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:02.504620  148525 api_server.go:131] duration metric: took 5.850548ms to wait for apiserver health ...
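
The health check above hits https://192.168.50.32:8444/healthz and treats a 200 response with body "ok" as healthy. Below is a standalone Go sketch of such a probe; certificate verification is skipped only to keep the illustration short, whereas a real client would trust the cluster CA instead.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is for illustration only; load the cluster CA in real use.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.50.32:8444/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }
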
	I1010 19:29:02.504629  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:02.685579  148525 system_pods.go:59] 9 kube-system pods found
	I1010 19:29:02.685611  148525 system_pods.go:61] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:02.685618  148525 system_pods.go:61] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:02.685624  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:02.685630  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:02.685635  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:02.685639  148525 system_pods.go:61] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:02.685644  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:02.685653  148525 system_pods.go:61] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:02.685658  148525 system_pods.go:61] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:02.685669  148525 system_pods.go:74] duration metric: took 181.032548ms to wait for pod list to return data ...
	I1010 19:29:02.685683  148525 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:02.883256  148525 default_sa.go:45] found service account: "default"
	I1010 19:29:02.883288  148525 default_sa.go:55] duration metric: took 197.59742ms for default service account to be created ...
	I1010 19:29:02.883298  148525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:03.084706  148525 system_pods.go:86] 9 kube-system pods found
	I1010 19:29:03.084737  148525 system_pods.go:89] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:03.084742  148525 system_pods.go:89] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:03.084746  148525 system_pods.go:89] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:03.084751  148525 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:03.084755  148525 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:03.084759  148525 system_pods.go:89] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:03.084762  148525 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:03.084768  148525 system_pods.go:89] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:03.084772  148525 system_pods.go:89] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:03.084779  148525 system_pods.go:126] duration metric: took 201.476637ms to wait for k8s-apps to be running ...
	I1010 19:29:03.084787  148525 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:03.084832  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:03.100986  148525 system_svc.go:56] duration metric: took 16.183062ms WaitForService to wait for kubelet
	I1010 19:29:03.101026  148525 kubeadm.go:582] duration metric: took 9.655245557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:03.101050  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:03.282063  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:03.282095  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:03.282106  148525 node_conditions.go:105] duration metric: took 181.049888ms to run NodePressure ...
	I1010 19:29:03.282119  148525 start.go:241] waiting for startup goroutines ...
	I1010 19:29:03.282125  148525 start.go:246] waiting for cluster config update ...
	I1010 19:29:03.282135  148525 start.go:255] writing updated cluster config ...
	I1010 19:29:03.282414  148525 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:03.331838  148525 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:03.333698  148525 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-361847" cluster and "default" namespace by default
	I1010 19:28:58.775358  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:58.775396  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:58.812210  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:58.812269  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:01.381750  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:29:01.386658  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:29:01.387793  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:01.387819  147213 api_server.go:131] duration metric: took 3.945468552s to wait for apiserver health ...
	I1010 19:29:01.387829  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:01.387861  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:29:01.387948  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:29:01.433312  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:01.433344  147213 cri.go:89] found id: ""
	I1010 19:29:01.433433  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:29:01.433521  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.437920  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:29:01.437983  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:29:01.476429  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.476458  147213 cri.go:89] found id: ""
	I1010 19:29:01.476470  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:29:01.476522  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.480912  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:29:01.480987  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:29:01.522141  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.522164  147213 cri.go:89] found id: ""
	I1010 19:29:01.522173  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:29:01.522238  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.526742  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:29:01.526803  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:29:01.572715  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:01.572747  147213 cri.go:89] found id: ""
	I1010 19:29:01.572759  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:29:01.572814  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.577754  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:29:01.577832  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:29:01.616077  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.616104  147213 cri.go:89] found id: ""
	I1010 19:29:01.616121  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:29:01.616185  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.620622  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:29:01.620702  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:29:01.662859  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:01.662889  147213 cri.go:89] found id: ""
	I1010 19:29:01.662903  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:29:01.662964  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.667491  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:29:01.667585  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:29:01.706191  147213 cri.go:89] found id: ""
	I1010 19:29:01.706217  147213 logs.go:282] 0 containers: []
	W1010 19:29:01.706228  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:29:01.706234  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:29:01.706299  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:29:01.753559  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:01.753581  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:01.753584  147213 cri.go:89] found id: ""
	I1010 19:29:01.753591  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:29:01.753645  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.758179  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.762336  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:29:01.762358  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:29:01.867667  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:29:01.867698  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.911722  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:29:01.911756  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.955152  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:29:01.955189  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.995010  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:29:01.995041  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:02.047505  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:29:02.047546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:02.085080  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:29:02.085110  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:02.128482  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:29:02.128527  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:02.194867  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:29:02.194904  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:29:02.211881  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:29:02.211911  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:02.262969  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:29:02.263013  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:02.302921  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:29:02.302956  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:29:02.671102  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:29:02.671169  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:29:05.241477  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:29:05.241508  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.241513  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.241517  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.241521  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.241525  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.241528  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.241534  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.241540  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.241549  147213 system_pods.go:74] duration metric: took 3.853712488s to wait for pod list to return data ...
	I1010 19:29:05.241556  147213 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:05.244686  147213 default_sa.go:45] found service account: "default"
	I1010 19:29:05.244721  147213 default_sa.go:55] duration metric: took 3.158069ms for default service account to be created ...
	I1010 19:29:05.244733  147213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:05.249372  147213 system_pods.go:86] 8 kube-system pods found
	I1010 19:29:05.249398  147213 system_pods.go:89] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.249404  147213 system_pods.go:89] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.249408  147213 system_pods.go:89] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.249413  147213 system_pods.go:89] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.249418  147213 system_pods.go:89] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.249425  147213 system_pods.go:89] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.249433  147213 system_pods.go:89] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.249442  147213 system_pods.go:89] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.249455  147213 system_pods.go:126] duration metric: took 4.715381ms to wait for k8s-apps to be running ...
	I1010 19:29:05.249467  147213 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:05.249519  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:05.265180  147213 system_svc.go:56] duration metric: took 15.703413ms WaitForService to wait for kubelet
	I1010 19:29:05.265216  147213 kubeadm.go:582] duration metric: took 4m24.851723603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:05.265237  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:05.268775  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:05.268807  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:05.268821  147213 node_conditions.go:105] duration metric: took 3.575195ms to run NodePressure ...
	I1010 19:29:05.268834  147213 start.go:241] waiting for startup goroutines ...
	I1010 19:29:05.268840  147213 start.go:246] waiting for cluster config update ...
	I1010 19:29:05.268869  147213 start.go:255] writing updated cluster config ...
	I1010 19:29:05.269148  147213 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:05.319999  147213 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:05.322189  147213 out.go:177] * Done! kubectl is now configured to use "no-preload-320324" cluster and "default" namespace by default
	I1010 19:29:41.275119  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:29:41.275272  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:29:41.276822  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:29:41.276919  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:29:41.277017  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:29:41.277142  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:29:41.277254  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:29:41.277357  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:29:41.279069  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:29:41.279160  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:29:41.279217  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:29:41.279306  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:29:41.279381  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:29:41.279484  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:29:41.279576  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:29:41.279674  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:29:41.279779  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:29:41.279906  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:29:41.279971  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:29:41.280005  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:29:41.280052  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:29:41.280095  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:29:41.280144  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:29:41.280219  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:29:41.280317  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:29:41.280474  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:29:41.280583  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:29:41.280648  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:29:41.280736  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:29:41.282180  148123 out.go:235]   - Booting up control plane ...
	I1010 19:29:41.282266  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:29:41.282336  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:29:41.282414  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:29:41.282538  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:29:41.282748  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:29:41.282822  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:29:41.282896  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283061  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283123  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283287  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283348  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283519  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283623  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283809  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283900  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.284115  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.284135  148123 kubeadm.go:310] 
	I1010 19:29:41.284201  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:29:41.284243  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:29:41.284257  148123 kubeadm.go:310] 
	I1010 19:29:41.284307  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:29:41.284346  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:29:41.284481  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:29:41.284505  148123 kubeadm.go:310] 
	I1010 19:29:41.284655  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:29:41.284714  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:29:41.284752  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:29:41.284758  148123 kubeadm.go:310] 
	I1010 19:29:41.284913  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:29:41.285038  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:29:41.285055  148123 kubeadm.go:310] 
	I1010 19:29:41.285169  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:29:41.285307  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:29:41.285475  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:29:41.285587  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:29:41.285644  148123 kubeadm.go:310] 
	W1010 19:29:41.285747  148123 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1010 19:29:41.285792  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:29:46.681684  148123 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.395862838s)
	I1010 19:29:46.681797  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:46.696927  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:29:46.708250  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:29:46.708280  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:29:46.708339  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:29:46.719748  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:29:46.719828  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:29:46.730892  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:29:46.742329  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:29:46.742401  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:29:46.752919  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.762860  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:29:46.762932  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.772513  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:29:46.781672  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:29:46.781746  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:29:46.791250  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:29:47.018706  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:31:43.351851  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:31:43.351992  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:31:43.353664  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:31:43.353752  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:31:43.353861  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:31:43.353998  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:31:43.354130  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:31:43.354235  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:31:43.356450  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:31:43.356557  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:31:43.356652  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:31:43.356783  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:31:43.356902  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:31:43.357002  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:31:43.357074  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:31:43.357181  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:31:43.357247  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:31:43.357325  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:31:43.357408  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:31:43.357459  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:31:43.357529  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:31:43.357604  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:31:43.357676  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:31:43.357751  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:31:43.357830  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:31:43.357979  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:31:43.358092  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:31:43.358158  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:31:43.358263  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:31:43.360216  148123 out.go:235]   - Booting up control plane ...
	I1010 19:31:43.360332  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:31:43.360463  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:31:43.360551  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:31:43.360673  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:31:43.360907  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:31:43.360976  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:31:43.361058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361309  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361410  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361648  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361742  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361960  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362308  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362399  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362640  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362652  148123 kubeadm.go:310] 
	I1010 19:31:43.362708  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:31:43.362758  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:31:43.362768  148123 kubeadm.go:310] 
	I1010 19:31:43.362809  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:31:43.362858  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:31:43.362981  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:31:43.362993  148123 kubeadm.go:310] 
	I1010 19:31:43.363119  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:31:43.363164  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:31:43.363204  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:31:43.363219  148123 kubeadm.go:310] 
	I1010 19:31:43.363344  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:31:43.363454  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:31:43.363465  148123 kubeadm.go:310] 
	I1010 19:31:43.363591  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:31:43.363708  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:31:43.363803  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:31:43.363892  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:31:43.363978  148123 kubeadm.go:394] duration metric: took 8m3.085634474s to StartCluster
	I1010 19:31:43.364052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:31:43.364159  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:31:43.364268  148123 kubeadm.go:310] 
	I1010 19:31:43.413041  148123 cri.go:89] found id: ""
	I1010 19:31:43.413075  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.413086  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:31:43.413092  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:31:43.413206  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:31:43.456454  148123 cri.go:89] found id: ""
	I1010 19:31:43.456487  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.456499  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:31:43.456507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:31:43.456567  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:31:43.498582  148123 cri.go:89] found id: ""
	I1010 19:31:43.498614  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.498624  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:31:43.498633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:31:43.498694  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:31:43.540557  148123 cri.go:89] found id: ""
	I1010 19:31:43.540589  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.540598  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:31:43.540605  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:31:43.540661  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:31:43.576743  148123 cri.go:89] found id: ""
	I1010 19:31:43.576776  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.576787  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:31:43.576796  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:31:43.576887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:31:43.614620  148123 cri.go:89] found id: ""
	I1010 19:31:43.614652  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.614662  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:31:43.614668  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:31:43.614730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:31:43.652008  148123 cri.go:89] found id: ""
	I1010 19:31:43.652061  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.652075  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:31:43.652084  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:31:43.652153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:31:43.689990  148123 cri.go:89] found id: ""
	I1010 19:31:43.690019  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.690028  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:31:43.690043  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:31:43.690056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:31:43.745634  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:31:43.745673  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:31:43.760064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:31:43.760095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:31:43.840168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:31:43.840197  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:31:43.840214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:31:43.943265  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:31:43.943303  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1010 19:31:43.987758  148123 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:31:43.987816  148123 out.go:270] * 
	W1010 19:31:43.987891  148123 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.987906  148123 out.go:270] * 
	W1010 19:31:43.988703  148123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:31:43.991928  148123 out.go:201] 
	W1010 19:31:43.993494  148123 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.993556  148123 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:31:43.993585  148123 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1010 19:31:43.995273  148123 out.go:201] 
	
	
	==> CRI-O <==
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.362134801Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589249362106076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57eabbd1-bc17-4cfc-a1d2-4ec20e1263db name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.362741772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0660affb-45c8-40a8-b61a-8f557e5bd910 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.362821245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0660affb-45c8-40a8-b61a-8f557e5bd910 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.362857296Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0660affb-45c8-40a8-b61a-8f557e5bd910 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.397131876Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0a54be4-a06b-442b-90bd-d699880590d4 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.397217574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0a54be4-a06b-442b-90bd-d699880590d4 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.398346260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edce855a-1351-46f9-9471-40d9f14c1efd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.398808450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589249398784713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edce855a-1351-46f9-9471-40d9f14c1efd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.399392081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43e86f05-a1cd-46bf-9fe5-ff10b7850b62 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.399460396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43e86f05-a1cd-46bf-9fe5-ff10b7850b62 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.399503122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=43e86f05-a1cd-46bf-9fe5-ff10b7850b62 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.433120682Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cecdaeac-1333-4ade-9296-ad4584506257 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.433221115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cecdaeac-1333-4ade-9296-ad4584506257 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.434420383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf48372f-e9e1-4cf8-9db8-459fc37ceed9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.434905009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589249434872119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf48372f-e9e1-4cf8-9db8-459fc37ceed9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.435577576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e2c7a5f-471a-4559-8961-8f2727ca19e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.435647043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e2c7a5f-471a-4559-8961-8f2727ca19e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.435686393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4e2c7a5f-471a-4559-8961-8f2727ca19e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.469925427Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7939cc42-2f03-44e7-b74b-6d3effbb9760 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.470042976Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7939cc42-2f03-44e7-b74b-6d3effbb9760 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.471438137Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e292ba25-fe1c-4574-abf7-83bbf642ba31 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.471845171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589249471823028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e292ba25-fe1c-4574-abf7-83bbf642ba31 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.472481189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a52342d-161e-44ed-9ecb-c08f2fd6a6d1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.472557665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a52342d-161e-44ed-9ecb-c08f2fd6a6d1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:40:49 old-k8s-version-947203 crio[635]: time="2024-10-10 19:40:49.472601598Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4a52342d-161e-44ed-9ecb-c08f2fd6a6d1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct10 19:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051246] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042600] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.085550] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.699486] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.514715] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.834131] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.134078] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.216712] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.120541] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.278860] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +6.492743] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.072493] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.094540] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +12.843516] kauditd_printk_skb: 46 callbacks suppressed
	[Oct10 19:27] systemd-fstab-generator[5080]: Ignoring "noauto" option for root device
	[Oct10 19:29] systemd-fstab-generator[5373]: Ignoring "noauto" option for root device
	[  +0.064417] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:40:49 up 17 min,  0 users,  load average: 0.02, 0.02, 0.01
	Linux old-k8s-version-947203 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]: goroutine 125 [select]:
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000c46f60, 0xc000f05480, 0xc000e5ea80, 0xc000e5ea20)
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]: created by net.(*netFD).connect
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]: goroutine 124 [select]:
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000b74dc0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000f1e000, 0x0, 0x0)
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0005d5c00)
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 10 19:40:44 old-k8s-version-947203 kubelet[6550]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 10 19:40:44 old-k8s-version-947203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 10 19:40:45 old-k8s-version-947203 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 10 19:40:45 old-k8s-version-947203 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 10 19:40:45 old-k8s-version-947203 kubelet[6559]: I1010 19:40:45.100505    6559 server.go:416] Version: v1.20.0
	Oct 10 19:40:45 old-k8s-version-947203 kubelet[6559]: I1010 19:40:45.100809    6559 server.go:837] Client rotation is on, will bootstrap in background
	Oct 10 19:40:45 old-k8s-version-947203 kubelet[6559]: I1010 19:40:45.103069    6559 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 10 19:40:45 old-k8s-version-947203 kubelet[6559]: W1010 19:40:45.104609    6559 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 10 19:40:45 old-k8s-version-947203 kubelet[6559]: I1010 19:40:45.104684    6559 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947203 -n old-k8s-version-947203
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 2 (249.701043ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-947203" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.55s)
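The kubeadm output captured above shows the kubelet never became healthy on the v1.20.0 profile, and the log's own suggestion is to inspect the kubelet journal and, if the cgroup driver is mismatched, retry the start with an explicit cgroup driver. A minimal sketch of those follow-up steps, reusing the profile name, socket path, and flag quoted in the log above (not re-verified against this run):

	# inspect the kubelet on the node (e.g. after 'minikube ssh -p old-k8s-version-947203')
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# retry the start with the cgroup-driver hint from the log's suggestion
	out/minikube-linux-amd64 start -p old-k8s-version-947203 --extra-config=kubelet.cgroup-driver=systemd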

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (439.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-541370 -n embed-certs-541370
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-10 19:44:51.199472099 +0000 UTC m=+6438.884407285
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-541370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-541370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.347µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-541370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
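The check above expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4, but the kubectl describe call hit the context deadline before any deployment info could be read. When the apiserver is reachable, one possible way to verify the substituted image by hand (context, namespace, and deployment name are taken from the log; the jsonpath expression is illustrative):

	kubectl --context embed-certs-541370 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'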
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-541370 -n embed-certs-541370
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-541370 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-541370 logs -n 25: (2.104087668s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-029826                  | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-029826 --memory=2200 --alsologtostderr   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-541370            | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-029826 image list                           | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:17 UTC | 10 Oct 24 19:18 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320324                  | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-947203        | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-361847  | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-541370                 | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-947203             | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-361847       | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC | 10 Oct 24 19:29 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:43 UTC | 10 Oct 24 19:43 UTC |
	| delete  | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:44 UTC | 10 Oct 24 19:44 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 19:21:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 19:21:13.943219  148525 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:21:13.943336  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943343  148525 out.go:358] Setting ErrFile to fd 2...
	I1010 19:21:13.943347  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943560  148525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:21:13.944109  148525 out.go:352] Setting JSON to false
	I1010 19:21:13.945219  148525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11020,"bootTime":1728577054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:21:13.945321  148525 start.go:139] virtualization: kvm guest
	I1010 19:21:13.947915  148525 out.go:177] * [default-k8s-diff-port-361847] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:21:13.950021  148525 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:21:13.950037  148525 notify.go:220] Checking for updates...
	I1010 19:21:13.952994  148525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:21:13.954661  148525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:21:13.956438  148525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:21:13.958502  148525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:21:13.960099  148525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:21:13.961930  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:21:13.962374  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.962450  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.978323  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I1010 19:21:13.978926  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.979520  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.979538  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.979954  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.980144  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:13.980446  148525 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:21:13.980745  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.980784  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.996046  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I1010 19:21:13.996534  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.997069  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.997097  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.997530  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.997788  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:14.033593  148525 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 19:21:14.035367  148525 start.go:297] selected driver: kvm2
	I1010 19:21:14.035394  148525 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.035526  148525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:21:14.036341  148525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.036452  148525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:21:14.052462  148525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:21:14.052918  148525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:21:14.052967  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:21:14.053019  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:21:14.053067  148525 start.go:340] cluster config:
	{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.053178  148525 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.055485  148525 out.go:177] * Starting "default-k8s-diff-port-361847" primary control-plane node in "default-k8s-diff-port-361847" cluster
	I1010 19:21:16.773106  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:14.056945  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:21:14.057002  148525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 19:21:14.057014  148525 cache.go:56] Caching tarball of preloaded images
	I1010 19:21:14.057118  148525 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:21:14.057134  148525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 19:21:14.057268  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:21:14.057476  148525 start.go:360] acquireMachinesLock for default-k8s-diff-port-361847: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:21:22.853158  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:25.925174  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:32.005160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:35.077198  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:41.157130  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:44.229127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:50.309136  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:53.381191  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:59.461129  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:02.533201  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:08.613124  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:11.685169  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:17.765161  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:20.837208  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:26.917127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:29.989172  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:36.069147  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:39.141173  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:45.221160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:48.293141  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:51.297376  147758 start.go:364] duration metric: took 3m49.312490934s to acquireMachinesLock for "embed-certs-541370"
	I1010 19:22:51.297453  147758 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:22:51.297464  147758 fix.go:54] fixHost starting: 
	I1010 19:22:51.297787  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:22:51.297848  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:22:51.314087  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1010 19:22:51.314588  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:22:51.315115  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:22:51.315138  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:22:51.315509  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:22:51.315691  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:22:51.315879  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:22:51.317597  147758 fix.go:112] recreateIfNeeded on embed-certs-541370: state=Stopped err=<nil>
	I1010 19:22:51.317621  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	W1010 19:22:51.317781  147758 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:22:51.319664  147758 out.go:177] * Restarting existing kvm2 VM for "embed-certs-541370" ...
	I1010 19:22:51.320967  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Start
	I1010 19:22:51.321134  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring networks are active...
	I1010 19:22:51.322026  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network default is active
	I1010 19:22:51.322468  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network mk-embed-certs-541370 is active
	I1010 19:22:51.322874  147758 main.go:141] libmachine: (embed-certs-541370) Getting domain xml...
	I1010 19:22:51.323687  147758 main.go:141] libmachine: (embed-certs-541370) Creating domain...
	I1010 19:22:51.294881  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:22:51.294927  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295226  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:22:51.295256  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295454  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:22:51.297198  147213 machine.go:96] duration metric: took 4m37.414594306s to provisionDockerMachine
	I1010 19:22:51.297252  147213 fix.go:56] duration metric: took 4m37.436635356s for fixHost
	I1010 19:22:51.297259  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 4m37.436668423s
	W1010 19:22:51.297278  147213 start.go:714] error starting host: provision: host is not running
	W1010 19:22:51.297382  147213 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1010 19:22:51.297396  147213 start.go:729] Will try again in 5 seconds ...
	I1010 19:22:52.568699  147758 main.go:141] libmachine: (embed-certs-541370) Waiting to get IP...
	I1010 19:22:52.569582  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.569952  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.570018  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.569935  148914 retry.go:31] will retry after 261.244287ms: waiting for machine to come up
	I1010 19:22:52.832639  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.833280  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.833310  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.833200  148914 retry.go:31] will retry after 304.116732ms: waiting for machine to come up
	I1010 19:22:53.138770  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.139091  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.139124  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.139055  148914 retry.go:31] will retry after 484.354474ms: waiting for machine to come up
	I1010 19:22:53.624831  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.625293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.625323  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.625234  148914 retry.go:31] will retry after 591.916836ms: waiting for machine to come up
	I1010 19:22:54.219214  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.219732  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.219763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.219673  148914 retry.go:31] will retry after 614.162479ms: waiting for machine to come up
	I1010 19:22:54.835573  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.836038  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.836063  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.835988  148914 retry.go:31] will retry after 824.170953ms: waiting for machine to come up
	I1010 19:22:55.662092  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:55.662646  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:55.662668  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:55.662586  148914 retry.go:31] will retry after 928.483848ms: waiting for machine to come up
	I1010 19:22:56.593200  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:56.593724  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:56.593756  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:56.593679  148914 retry.go:31] will retry after 941.138644ms: waiting for machine to come up
	I1010 19:22:56.299351  147213 start.go:360] acquireMachinesLock for no-preload-320324: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:22:57.536977  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:57.537403  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:57.537429  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:57.537331  148914 retry.go:31] will retry after 1.262203584s: waiting for machine to come up
	I1010 19:22:58.801921  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:58.802420  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:58.802454  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:58.802381  148914 retry.go:31] will retry after 2.154751391s: waiting for machine to come up
	I1010 19:23:00.960100  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:00.960661  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:00.960684  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:00.960607  148914 retry.go:31] will retry after 1.945155171s: waiting for machine to come up
	I1010 19:23:02.907705  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:02.908097  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:02.908129  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:02.908038  148914 retry.go:31] will retry after 3.245262469s: waiting for machine to come up
	I1010 19:23:06.157527  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:06.157897  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:06.157925  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:06.157858  148914 retry.go:31] will retry after 3.973579024s: waiting for machine to come up
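For context on the repeated retry.go lines above: after recreating the domain, the driver polls libvirt until the machine picks up a DHCP lease, sleeping for a growing, jittered interval between attempts. The Go sketch below only imitates that pattern; lookupIP is a hypothetical stand-in for the real lease lookup, and the delays merely resemble the ones logged.

	// Illustrative sketch of a "wait for machine to come up" retry loop.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address")

	// lookupIP is a hypothetical probe; the real code inspects the libvirt
	// DHCP leases for the domain's MAC address.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errNoIP
		}
		return "192.168.39.120", nil
	}

	func main() {
		for attempt := 1; ; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			// Grow the delay with each attempt and add jitter, roughly like
			// the intervals in the log (hundreds of ms at first, then seconds).
			delay := time.Duration(attempt) * time.Duration(200+rand.Intn(300)) * time.Millisecond
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
		}
	}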
	I1010 19:23:11.321975  148123 start.go:364] duration metric: took 3m17.648521443s to acquireMachinesLock for "old-k8s-version-947203"
	I1010 19:23:11.322043  148123 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:11.322056  148123 fix.go:54] fixHost starting: 
	I1010 19:23:11.322537  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:11.322596  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:11.339586  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1010 19:23:11.340091  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:11.340686  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:23:11.340719  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:11.341101  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:11.341295  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:11.341453  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetState
	I1010 19:23:11.343215  148123 fix.go:112] recreateIfNeeded on old-k8s-version-947203: state=Stopped err=<nil>
	I1010 19:23:11.343247  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	W1010 19:23:11.343408  148123 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:11.345653  148123 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-947203" ...
	I1010 19:23:10.135369  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has current primary IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135830  147758 main.go:141] libmachine: (embed-certs-541370) Found IP for machine: 192.168.39.120
	I1010 19:23:10.135839  147758 main.go:141] libmachine: (embed-certs-541370) Reserving static IP address...
	I1010 19:23:10.136283  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.136311  147758 main.go:141] libmachine: (embed-certs-541370) Reserved static IP address: 192.168.39.120
	I1010 19:23:10.136327  147758 main.go:141] libmachine: (embed-certs-541370) DBG | skip adding static IP to network mk-embed-certs-541370 - found existing host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"}
	I1010 19:23:10.136339  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Getting to WaitForSSH function...
	I1010 19:23:10.136351  147758 main.go:141] libmachine: (embed-certs-541370) Waiting for SSH to be available...
	I1010 19:23:10.138861  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139259  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.139293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139438  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH client type: external
	I1010 19:23:10.139472  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa (-rw-------)
	I1010 19:23:10.139517  147758 main.go:141] libmachine: (embed-certs-541370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:10.139541  147758 main.go:141] libmachine: (embed-certs-541370) DBG | About to run SSH command:
	I1010 19:23:10.139562  147758 main.go:141] libmachine: (embed-certs-541370) DBG | exit 0
	I1010 19:23:10.261078  147758 main.go:141] libmachine: (embed-certs-541370) DBG | SSH cmd err, output: <nil>: 
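The "Getting to WaitForSSH" step above boils down to running `exit 0` over SSH until the command succeeds, using the external ssh client and key shown in the log. A minimal sketch of that probe follows; the retry cadence and the 2-second sleep are assumptions for illustration, not minikube's actual timing.

	// Illustrative sketch: poll until `ssh ... exit 0` succeeds.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshAlive(ip, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@"+ip,
			"exit 0",
		)
		return cmd.Run() == nil // exit status 0 means SSH is reachable
	}

	func main() {
		ip := "192.168.39.120" // from the DHCP lease above
		key := "/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa"
		for !sshAlive(ip, key) {
			fmt.Println("SSH not ready yet, retrying...")
			time.Sleep(2 * time.Second) // the real code bounds this wait with a timeout
		}
		fmt.Println("SSH available")
	}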
	I1010 19:23:10.261533  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetConfigRaw
	I1010 19:23:10.262192  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.265071  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265467  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.265515  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265737  147758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/config.json ...
	I1010 19:23:10.265941  147758 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:10.265960  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:10.266188  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.269186  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269618  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.269649  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269799  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.269984  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270206  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270345  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.270550  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.270834  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.270849  147758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:10.373285  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:10.373316  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373625  147758 buildroot.go:166] provisioning hostname "embed-certs-541370"
	I1010 19:23:10.373660  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373835  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.376552  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.376951  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.376994  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.377132  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.377332  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377489  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377606  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.377745  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.377918  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.377930  147758 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-541370 && echo "embed-certs-541370" | sudo tee /etc/hostname
	I1010 19:23:10.495847  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-541370
	
	I1010 19:23:10.495880  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.498868  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499205  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.499247  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499362  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.499556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499700  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499829  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.499961  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.500187  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.500210  147758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-541370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-541370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-541370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:10.614318  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:10.614357  147758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:10.614412  147758 buildroot.go:174] setting up certificates
	I1010 19:23:10.614429  147758 provision.go:84] configureAuth start
	I1010 19:23:10.614457  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.614763  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.617457  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.617888  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.617916  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.618078  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.620243  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620635  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.620666  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620789  147758 provision.go:143] copyHostCerts
	I1010 19:23:10.620895  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:10.620913  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:10.620998  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:10.621111  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:10.621123  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:10.621159  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:10.621245  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:10.621257  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:10.621292  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:10.621364  147758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.embed-certs-541370 san=[127.0.0.1 192.168.39.120 embed-certs-541370 localhost minikube]
	I1010 19:23:10.697456  147758 provision.go:177] copyRemoteCerts
	I1010 19:23:10.697515  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:10.697547  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.700439  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.700799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700956  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.701162  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.701320  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.701465  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:10.783442  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:10.808446  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 19:23:10.832117  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:23:10.856286  147758 provision.go:87] duration metric: took 241.840139ms to configureAuth
	I1010 19:23:10.856318  147758 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:10.856528  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:10.856640  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.859252  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859677  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.859708  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859916  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.860087  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860222  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.860524  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.860688  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.860702  147758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:11.086349  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:11.086375  147758 machine.go:96] duration metric: took 820.421344ms to provisionDockerMachine
	I1010 19:23:11.086386  147758 start.go:293] postStartSetup for "embed-certs-541370" (driver="kvm2")
	I1010 19:23:11.086401  147758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:11.086423  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.086755  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:11.086783  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.089482  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.089838  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.089860  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.090042  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.090253  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.090410  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.090535  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.172474  147758 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:11.176699  147758 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:11.176733  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:11.176800  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:11.176899  147758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:11.177044  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:11.186985  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:11.211385  147758 start.go:296] duration metric: took 124.982089ms for postStartSetup
	I1010 19:23:11.211442  147758 fix.go:56] duration metric: took 19.913977793s for fixHost
	I1010 19:23:11.211472  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.214421  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214780  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.214812  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214999  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.215219  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215429  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215612  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.215786  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:11.215974  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:11.215985  147758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:11.321786  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588191.295446348
	
	I1010 19:23:11.321814  147758 fix.go:216] guest clock: 1728588191.295446348
	I1010 19:23:11.321822  147758 fix.go:229] Guest: 2024-10-10 19:23:11.295446348 +0000 UTC Remote: 2024-10-10 19:23:11.211447413 +0000 UTC m=+249.373680838 (delta=83.998935ms)
	I1010 19:23:11.321870  147758 fix.go:200] guest clock delta is within tolerance: 83.998935ms
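The fix.go lines above compare the guest's `date +%s.%N` output against the host's wall clock and accept the machine when the difference is small. A rough sketch of that check; the tolerance value is an assumption for illustration, not the threshold minikube actually uses.

	// Illustrative sketch of the guest-vs-host clock delta check.
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		guestOut := "1728588191.295446348" // `date +%s.%N` output from the log
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))

		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 5 * time.Second // assumed threshold
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
	}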
	I1010 19:23:11.321877  147758 start.go:83] releasing machines lock for "embed-certs-541370", held for 20.024455781s
	I1010 19:23:11.321905  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.322169  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:11.325004  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325350  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.325375  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325566  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326090  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326294  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326383  147758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:11.326444  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.326501  147758 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:11.326529  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.329311  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329657  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.329690  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329713  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329866  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330057  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330160  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.330188  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.330204  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330346  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.330538  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330687  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330821  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.406525  147758 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:11.428958  147758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:11.577663  147758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:11.584024  147758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:11.584112  147758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:11.603163  147758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:11.603190  147758 start.go:495] detecting cgroup driver to use...
	I1010 19:23:11.603291  147758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:11.624744  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:11.645477  147758 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:11.645537  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:11.660216  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:11.675019  147758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:11.796038  147758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:11.967750  147758 docker.go:233] disabling docker service ...
	I1010 19:23:11.967828  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:11.983184  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:12.001603  147758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:12.149408  147758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:12.306724  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:12.324302  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:12.345426  147758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:12.345508  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.357812  147758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:12.357883  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.370095  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.382389  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.395000  147758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:12.408429  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.426851  147758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.450568  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.463434  147758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:12.474537  147758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:12.474606  147758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:12.489074  147758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:12.500048  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:12.635695  147758 ssh_runner.go:195] Run: sudo systemctl restart crio
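The sed commands above pin CRI-O's pause image to registry.k8s.io/pause:3.10 and switch the cgroup manager to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf before restarting the service. The Go sketch below applies the same two substitutions to an in-memory copy of such a file; the sample contents are invented for illustration.

	// Illustrative sketch: the two line rewrites performed by the sed commands.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"

		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}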
	I1010 19:23:12.733511  147758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:12.733593  147758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:12.739072  147758 start.go:563] Will wait 60s for crictl version
	I1010 19:23:12.739138  147758 ssh_runner.go:195] Run: which crictl
	I1010 19:23:12.743675  147758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:12.792272  147758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:12.792379  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.829968  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.862579  147758 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:11.347158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .Start
	I1010 19:23:11.347359  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring networks are active...
	I1010 19:23:11.348252  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network default is active
	I1010 19:23:11.348689  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network mk-old-k8s-version-947203 is active
	I1010 19:23:11.349139  148123 main.go:141] libmachine: (old-k8s-version-947203) Getting domain xml...
	I1010 19:23:11.349799  148123 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:23:12.671472  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting to get IP...
	I1010 19:23:12.672628  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.673145  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.673278  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.673132  149057 retry.go:31] will retry after 280.111667ms: waiting for machine to come up
	I1010 19:23:12.954865  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.955325  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.955377  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.955300  149057 retry.go:31] will retry after 289.967238ms: waiting for machine to come up
	I1010 19:23:13.247039  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.247663  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.247769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.247632  149057 retry.go:31] will retry after 319.085935ms: waiting for machine to come up
	I1010 19:23:12.863797  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:12.867335  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.867760  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:12.867794  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.868029  147758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:12.872503  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:12.887684  147758 kubeadm.go:883] updating cluster {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:12.887809  147758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:12.887853  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:12.924155  147758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:12.924240  147758 ssh_runner.go:195] Run: which lz4
	I1010 19:23:12.928613  147758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:12.933024  147758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:12.933069  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:14.450790  147758 crio.go:462] duration metric: took 1.522223644s to copy over tarball
	I1010 19:23:14.450893  147758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:16.642155  147758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191220673s)
	I1010 19:23:16.642193  147758 crio.go:469] duration metric: took 2.191371146s to extract the tarball
	I1010 19:23:16.642202  147758 ssh_runner.go:146] rm: /preloaded.tar.lz4
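The preload step above copies a ~388 MB .tar.lz4 image bundle onto the guest, unpacks it into /var with the tar invocation shown in the log, then removes the tarball. A sketch that runs the same extraction command locally and reports a duration like the metric above; the paths are the ones from the log, and sudo, tar, and lz4 are assumed to be available.

	// Illustrative sketch: extract the preloaded image tarball and report timing.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	}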
	I1010 19:23:16.679611  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:16.723840  147758 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:16.723865  147758 cache_images.go:84] Images are preloaded, skipping loading
	I1010 19:23:16.723874  147758 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.1 crio true true} ...
	I1010 19:23:16.723998  147758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-541370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:16.724081  147758 ssh_runner.go:195] Run: crio config
	I1010 19:23:16.779659  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:16.779682  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:16.779693  147758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:16.779714  147758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-541370 NodeName:embed-certs-541370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:16.779842  147758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-541370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:16.779904  147758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:16.791424  147758 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:16.791493  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:16.801715  147758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1010 19:23:16.821364  147758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:16.842703  147758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1010 19:23:16.864835  147758 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:16.868928  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:13.568192  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.568690  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.568721  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.568657  149057 retry.go:31] will retry after 575.032546ms: waiting for machine to come up
	I1010 19:23:14.145650  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.146293  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.146319  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.146234  149057 retry.go:31] will retry after 702.803794ms: waiting for machine to come up
	I1010 19:23:14.851201  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.851736  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.851769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.851706  149057 retry.go:31] will retry after 883.195401ms: waiting for machine to come up
	I1010 19:23:15.736627  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:15.737199  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:15.737252  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:15.737155  149057 retry.go:31] will retry after 794.699815ms: waiting for machine to come up
	I1010 19:23:16.533952  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:16.534510  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:16.534545  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:16.534443  149057 retry.go:31] will retry after 961.751912ms: waiting for machine to come up
	I1010 19:23:17.497535  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:17.498007  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:17.498035  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:17.497936  149057 retry.go:31] will retry after 1.423503471s: waiting for machine to come up
	I1010 19:23:16.883162  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:17.027646  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:17.045083  147758 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370 for IP: 192.168.39.120
	I1010 19:23:17.045108  147758 certs.go:194] generating shared ca certs ...
	I1010 19:23:17.045130  147758 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:17.045491  147758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:17.045561  147758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:17.045579  147758 certs.go:256] generating profile certs ...
	I1010 19:23:17.045730  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/client.key
	I1010 19:23:17.045814  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key.dd7630a8
	I1010 19:23:17.045874  147758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key
	I1010 19:23:17.046015  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:17.046055  147758 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:17.046075  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:17.046114  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:17.046150  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:17.046177  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:17.046235  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:17.047131  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:17.087057  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:17.137707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:17.181707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:17.213227  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 19:23:17.247846  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:17.275989  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:17.301144  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 19:23:17.326232  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:17.350586  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:17.374666  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:17.399570  147758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:17.417846  147758 ssh_runner.go:195] Run: openssl version
	I1010 19:23:17.424206  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:17.436091  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441020  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441090  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.447318  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:17.459191  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:17.470878  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476185  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476248  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.482808  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:17.494626  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:17.506522  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511484  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511558  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.517445  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:17.529109  147758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:17.534139  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:17.540846  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:17.547429  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:17.554350  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:17.561036  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:17.567571  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
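Each control-plane certificate above is checked with `openssl x509 -checkend 86400`, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); a zero exit indicates more than a day of validity remains. A standalone sketch of the same check (the path is taken from the log, the echo messages are illustrative):

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "apiserver.crt valid for at least 24h" \
	  || echo "apiserver.crt expires within 24h"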
	I1010 19:23:17.574019  147758 kubeadm.go:392] StartCluster: {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:17.574128  147758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:17.574187  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.612699  147758 cri.go:89] found id: ""
	I1010 19:23:17.612804  147758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:17.623827  147758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:17.623856  147758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:17.623917  147758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:17.634732  147758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:17.635754  147758 kubeconfig.go:125] found "embed-certs-541370" server: "https://192.168.39.120:8443"
	I1010 19:23:17.637813  147758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:17.648543  147758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I1010 19:23:17.648590  147758 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:17.648606  147758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:17.648671  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.693966  147758 cri.go:89] found id: ""
	I1010 19:23:17.694057  147758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:17.715977  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:17.727871  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:17.727891  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:17.727942  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:17.738274  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:17.738340  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:17.748925  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:17.758945  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:17.759008  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:17.769169  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.779196  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:17.779282  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.790948  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:17.802264  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:17.802332  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:17.814009  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:17.826820  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:17.947270  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.128720  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.181409785s)
	I1010 19:23:19.128770  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.343735  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.419728  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.529802  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:19.529930  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.030019  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.530833  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.558314  147758 api_server.go:72] duration metric: took 1.028510044s to wait for apiserver process to appear ...
	I1010 19:23:20.558350  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:23:20.558375  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:20.558991  147758 api_server.go:269] stopped: https://192.168.39.120:8443/healthz: Get "https://192.168.39.120:8443/healthz": dial tcp 192.168.39.120:8443: connect: connection refused
	I1010 19:23:21.058727  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:18.923057  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:18.923636  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:18.923671  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:18.923582  149057 retry.go:31] will retry after 2.09836426s: waiting for machine to come up
	I1010 19:23:21.023500  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:21.024054  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:21.024084  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:21.023999  149057 retry.go:31] will retry after 2.809962093s: waiting for machine to come up
	I1010 19:23:23.187135  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:23:23.187187  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:23:23.187203  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.233367  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.233414  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:23.558658  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.575108  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.575139  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.058679  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.065735  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:24.065763  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.559440  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.565460  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:23:24.571828  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:23:24.571859  147758 api_server.go:131] duration metric: took 4.013501806s to wait for apiserver health ...
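The [+]/[-] lines above are the API server's verbose healthz report, returned alongside the 500 while post-start hooks are still completing; once every check passes the endpoint returns a plain 200 "ok", as seen here. Unauthenticated probes are rejected (the 403 for system:anonymous earlier in this log), so reproducing the same report by hand goes through the profile's kubeconfig, for example:

	kubectl --context embed-certs-541370 get --raw '/healthz?verbose'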
	I1010 19:23:24.571869  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:24.571875  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:24.573875  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:23:24.575458  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:23:24.586870  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:23:24.624362  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:23:24.643465  147758 system_pods.go:59] 8 kube-system pods found
	I1010 19:23:24.643516  147758 system_pods.go:61] "coredns-7c65d6cfc9-fgtkg" [df696e79-ca6f-4d73-a57e-9c6cdc93c505] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:23:24.643532  147758 system_pods.go:61] "etcd-embed-certs-541370" [254fa12c-b0d2-499f-8dd9-c1505efeaaab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:23:24.643543  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [fcd3809d-d325-4481-8e86-c246e29458fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:23:24.643565  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ab0fdd6b-d9b7-48dc-b82f-29b21d2295ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:23:24.643584  147758 system_pods.go:61] "kube-proxy-f5l6x" [446383fa-44c5-4b9e-bfc5-e38799597e75] Running
	I1010 19:23:24.643592  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [1c6af7e7-ce16-4ae2-8feb-e5d474173de1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:23:24.643603  147758 system_pods.go:61] "metrics-server-6867b74b74-kw529" [aad00321-d499-4563-849e-286d6e699fc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:23:24.643611  147758 system_pods.go:61] "storage-provisioner" [df4ae621-5066-4276-9276-a0538a9f9dd1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:23:24.643620  147758 system_pods.go:74] duration metric: took 19.234558ms to wait for pod list to return data ...
	I1010 19:23:24.643637  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:23:24.651647  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:23:24.651683  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:23:24.651699  147758 node_conditions.go:105] duration metric: took 8.056629ms to run NodePressure ...
	I1010 19:23:24.651720  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:24.915651  147758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921104  147758 kubeadm.go:739] kubelet initialised
	I1010 19:23:24.921131  147758 kubeadm.go:740] duration metric: took 5.44643ms waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921142  147758 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:23:24.927535  147758 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:23.837827  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:23.838271  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:23.838295  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:23.838220  149057 retry.go:31] will retry after 2.562944835s: waiting for machine to come up
	I1010 19:23:26.402699  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:26.403250  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:26.403294  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:26.403192  149057 retry.go:31] will retry after 3.867656846s: waiting for machine to come up
	I1010 19:23:26.932764  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:28.936055  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.434959  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.893914  148525 start.go:364] duration metric: took 2m17.836396131s to acquireMachinesLock for "default-k8s-diff-port-361847"
	I1010 19:23:31.893993  148525 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:31.894007  148525 fix.go:54] fixHost starting: 
	I1010 19:23:31.894438  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:31.894502  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:31.914583  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I1010 19:23:31.915054  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:31.915535  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:23:31.915560  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:31.915967  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:31.916207  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:31.916387  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:23:31.918035  148525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-361847: state=Stopped err=<nil>
	I1010 19:23:31.918073  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	W1010 19:23:31.918241  148525 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:31.920390  148525 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-361847" ...
	I1010 19:23:30.275222  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.275762  148123 main.go:141] libmachine: (old-k8s-version-947203) Found IP for machine: 192.168.61.112
	I1010 19:23:30.275779  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserving static IP address...
	I1010 19:23:30.275794  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has current primary IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.276478  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.276529  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserved static IP address: 192.168.61.112
	I1010 19:23:30.276553  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | skip adding static IP to network mk-old-k8s-version-947203 - found existing host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"}
	I1010 19:23:30.276574  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:23:30.276590  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting for SSH to be available...
	I1010 19:23:30.278486  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278854  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.278890  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278984  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:23:30.279016  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:23:30.279051  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:30.279068  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:23:30.279080  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:23:30.405053  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:30.405492  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:23:30.406224  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.408830  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409178  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.409204  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409462  148123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:23:30.409673  148123 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:30.409693  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:30.409916  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.412131  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412400  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.412427  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.412770  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.412965  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.413117  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.413368  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.413653  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.413668  148123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:30.525149  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:30.525176  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525417  148123 buildroot.go:166] provisioning hostname "old-k8s-version-947203"
	I1010 19:23:30.525478  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525703  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.528343  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528681  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.528708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528841  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.529035  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529185  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529303  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.529460  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.529635  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.529645  148123 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947203 && echo "old-k8s-version-947203" | sudo tee /etc/hostname
	I1010 19:23:30.655605  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947203
	
	I1010 19:23:30.655644  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.658439  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.658834  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.658869  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.659063  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.659285  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659458  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.659733  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.659905  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.659921  148123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:30.781974  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:30.782011  148123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:30.782033  148123 buildroot.go:174] setting up certificates
	I1010 19:23:30.782048  148123 provision.go:84] configureAuth start
	I1010 19:23:30.782059  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.782379  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.785189  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785584  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.785613  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785782  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.788086  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788432  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.788458  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788582  148123 provision.go:143] copyHostCerts
	I1010 19:23:30.788645  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:30.788660  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:30.788729  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:30.788839  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:30.788867  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:30.788900  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:30.788977  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:30.788986  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:30.789013  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:30.789081  148123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947203 san=[127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]
	I1010 19:23:31.240626  148123 provision.go:177] copyRemoteCerts
	I1010 19:23:31.240698  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:31.240737  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.243678  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244108  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.244138  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244356  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.244565  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.244765  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.244929  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.331337  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:31.356266  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1010 19:23:31.381186  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:31.405301  148123 provision.go:87] duration metric: took 623.235619ms to configureAuth
	I1010 19:23:31.405337  148123 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:31.405541  148123 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:23:31.405620  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.408111  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408444  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.408470  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408695  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.408947  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409109  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.409374  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.409583  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.409609  148123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:31.647023  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:31.647050  148123 machine.go:96] duration metric: took 1.237364012s to provisionDockerMachine
	I1010 19:23:31.647063  148123 start.go:293] postStartSetup for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:23:31.647073  148123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:31.647104  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.647464  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:31.647502  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.649991  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650288  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.650311  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650484  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.650637  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.650832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.650968  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.735985  148123 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:31.740689  148123 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:31.740725  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:31.740803  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:31.740947  148123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:31.741057  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:31.751383  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:31.777091  148123 start.go:296] duration metric: took 130.011618ms for postStartSetup
	I1010 19:23:31.777134  148123 fix.go:56] duration metric: took 20.455078936s for fixHost
	I1010 19:23:31.777158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.780083  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780661  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.780708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.781069  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781245  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781382  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.781534  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.781711  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.781721  148123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:31.893727  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588211.849777581
	
	I1010 19:23:31.893756  148123 fix.go:216] guest clock: 1728588211.849777581
	I1010 19:23:31.893763  148123 fix.go:229] Guest: 2024-10-10 19:23:31.849777581 +0000 UTC Remote: 2024-10-10 19:23:31.777138808 +0000 UTC m=+218.253674512 (delta=72.638773ms)
	I1010 19:23:31.893806  148123 fix.go:200] guest clock delta is within tolerance: 72.638773ms
	I1010 19:23:31.893813  148123 start.go:83] releasing machines lock for "old-k8s-version-947203", held for 20.571797307s
	I1010 19:23:31.893848  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.894151  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:31.896747  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897156  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.897207  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897385  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898036  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898375  148123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:31.898422  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.898461  148123 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:31.898487  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.901274  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901450  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901608  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901649  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901784  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.901920  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901945  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901952  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902112  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.902130  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902306  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902348  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.902412  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902533  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:32.016264  148123 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:32.022618  148123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:32.169502  148123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:32.175960  148123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:32.176039  148123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:32.193584  148123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:32.193615  148123 start.go:495] detecting cgroup driver to use...
	I1010 19:23:32.193698  148123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:32.210923  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:32.227172  148123 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:32.227254  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:32.242142  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:32.256896  148123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:32.387462  148123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:32.575184  148123 docker.go:233] disabling docker service ...
	I1010 19:23:32.575257  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:32.590667  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:32.604825  148123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:32.739827  148123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:32.863648  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:32.879435  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:32.899582  148123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:23:32.899659  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.915010  148123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:32.915081  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.931181  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.944038  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.955182  148123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:32.972220  148123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:32.982681  148123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:32.982739  148123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:32.997012  148123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:33.009468  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:33.180403  148123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:33.284934  148123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:33.285008  148123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:33.290063  148123 start.go:563] Will wait 60s for crictl version
	I1010 19:23:33.290124  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:33.294752  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:33.342710  148123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:33.342792  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.372821  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.409395  148123 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:23:33.411012  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:33.414374  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.414789  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:33.414836  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.415084  148123 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:33.420164  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:33.433442  148123 kubeadm.go:883] updating cluster {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:33.433631  148123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:23:33.433717  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:33.480013  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:33.480078  148123 ssh_runner.go:195] Run: which lz4
	I1010 19:23:33.485443  148123 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:33.490986  148123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:33.491032  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:23:31.921836  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Start
	I1010 19:23:31.922036  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring networks are active...
	I1010 19:23:31.922890  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network default is active
	I1010 19:23:31.923271  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network mk-default-k8s-diff-port-361847 is active
	I1010 19:23:31.923685  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Getting domain xml...
	I1010 19:23:31.924449  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Creating domain...
	I1010 19:23:33.241164  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting to get IP...
	I1010 19:23:33.242273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242713  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242814  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.242702  149213 retry.go:31] will retry after 195.013046ms: waiting for machine to come up
	I1010 19:23:33.438965  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439452  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.439379  149213 retry.go:31] will retry after 344.223823ms: waiting for machine to come up
	I1010 19:23:33.785167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785833  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785864  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.785780  149213 retry.go:31] will retry after 342.787658ms: waiting for machine to come up
	I1010 19:23:33.435066  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:34.936768  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:34.936800  147758 pod_ready.go:82] duration metric: took 10.009235225s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:34.936814  147758 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944395  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.944430  147758 pod_ready.go:82] duration metric: took 1.007599746s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944445  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953224  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.953255  147758 pod_ready.go:82] duration metric: took 8.801702ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953266  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.227279  148123 crio.go:462] duration metric: took 1.74186828s to copy over tarball
	I1010 19:23:35.227383  148123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:38.216499  148123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.989082447s)
	I1010 19:23:38.216533  148123 crio.go:469] duration metric: took 2.989218587s to extract the tarball
	I1010 19:23:38.216541  148123 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:38.259699  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:38.308284  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:38.308313  148123 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:23:38.308422  148123 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:23:38.308477  148123 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.308482  148123 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.308593  148123 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.308617  148123 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.308405  148123 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.308446  148123 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.308530  148123 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310558  148123 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.310612  148123 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.310642  148123 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.310652  148123 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310645  148123 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.310560  148123 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.310733  148123 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.311068  148123 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:23:38.469174  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.469563  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.488463  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.490815  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.491841  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.496079  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.501292  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:23:34.130443  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130998  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.130915  149213 retry.go:31] will retry after 393.100812ms: waiting for machine to come up
	I1010 19:23:34.525570  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526032  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526060  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.525980  149213 retry.go:31] will retry after 465.468437ms: waiting for machine to come up
	I1010 19:23:34.992775  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993348  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993386  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.993287  149213 retry.go:31] will retry after 907.884473ms: waiting for machine to come up
	I1010 19:23:35.902481  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902942  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:35.902878  149213 retry.go:31] will retry after 1.157806188s: waiting for machine to come up
	I1010 19:23:37.062068  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062777  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:37.062706  149213 retry.go:31] will retry after 1.432559208s: waiting for machine to come up
	I1010 19:23:38.496653  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497153  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:38.497066  149213 retry.go:31] will retry after 1.559787003s: waiting for machine to come up
	I1010 19:23:37.961068  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.065559  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.528757  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.528786  147758 pod_ready.go:82] duration metric: took 4.575513259s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.528802  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538002  147758 pod_ready.go:93] pod "kube-proxy-f5l6x" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.538034  147758 pod_ready.go:82] duration metric: took 9.22357ms for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538049  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543594  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.543615  147758 pod_ready.go:82] duration metric: took 5.558665ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543626  147758 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:38.581315  148123 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:23:38.581361  148123 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:23:38.581385  148123 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.581407  148123 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.581442  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.581457  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.642668  148123 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:23:38.642721  148123 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.642777  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658235  148123 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:23:38.658291  148123 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.658288  148123 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:23:38.658328  148123 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.658346  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658373  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.678038  148123 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:23:38.678097  148123 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.678155  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682719  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.682773  148123 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:23:38.682721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.682791  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.682812  148123 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:23:38.682845  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.682872  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.691181  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842626  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:38.842714  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.842754  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.842797  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.842721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.842851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842789  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.989994  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.005815  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:39.005923  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:39.010029  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:39.010105  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:39.010150  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:39.010230  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:39.134646  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:39.141431  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.176750  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:23:39.176945  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:23:39.176989  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:23:39.177038  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:23:39.177073  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:23:39.177104  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:23:39.335056  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:23:39.335113  148123 cache_images.go:92] duration metric: took 1.026784312s to LoadCachedImages
	W1010 19:23:39.335180  148123 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1010 19:23:39.335192  148123 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.20.0 crio true true} ...
	I1010 19:23:39.335305  148123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-947203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:39.335380  148123 ssh_runner.go:195] Run: crio config
	I1010 19:23:39.394307  148123 cni.go:84] Creating CNI manager for ""
	I1010 19:23:39.394338  148123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:39.394350  148123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:39.394378  148123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947203 NodeName:old-k8s-version-947203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:23:39.394572  148123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-947203"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:39.394662  148123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:23:39.405510  148123 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:39.405600  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:39.415968  148123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1010 19:23:39.443234  148123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:39.462448  148123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:23:39.481959  148123 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:39.486188  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:39.501922  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:39.642769  148123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:39.661335  148123 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203 for IP: 192.168.61.112
	I1010 19:23:39.661363  148123 certs.go:194] generating shared ca certs ...
	I1010 19:23:39.661384  148123 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:39.661595  148123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:39.661658  148123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:39.661671  148123 certs.go:256] generating profile certs ...
	I1010 19:23:39.661779  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key
	I1010 19:23:39.661867  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52
	I1010 19:23:39.661922  148123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key
	I1010 19:23:39.662312  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:39.662395  148123 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:39.662410  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:39.662447  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:39.662501  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:39.662531  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:39.662622  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:39.663372  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:39.710366  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:39.752724  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:39.801408  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:39.843557  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 19:23:39.892522  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:39.938519  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:39.966140  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:39.992974  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:40.019381  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:40.045769  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:40.080683  148123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:40.099747  148123 ssh_runner.go:195] Run: openssl version
	I1010 19:23:40.107865  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:40.123158  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128594  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128660  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.135319  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:40.147514  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:40.162463  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168387  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168512  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.176304  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:40.191306  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:40.204196  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211044  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211122  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.219574  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:40.232634  148123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:40.237639  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:40.244141  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:40.251188  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:40.258538  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:40.265029  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:40.271754  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
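Each control-plane certificate is then probed with `openssl x509 -checkend 86400`, i.e. "will this certificate expire within the next 24 hours?". A small Go sketch of an equivalent native check using crypto/x509; the path and window here are stand-ins, not values from the report:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM certificate at path expires
// within the given window (the log uses -checkend 86400, i.e. 24h).
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}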
	I1010 19:23:40.278352  148123 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:40.278453  148123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:40.278518  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.317771  148123 cri.go:89] found id: ""
	I1010 19:23:40.317838  148123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:40.330494  148123 kubeadm.go:408] found existing configuration files, will attempt cluster restart
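The restart-vs-fresh-init decision above hinges on the `sudo ls` probe for kubeadm-flags.env, config.yaml and the etcd data directory succeeding. A trivial Go sketch of that same existence probe; the paths are copied from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os"
)

func main() {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	existing := true
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			existing = false
			fmt.Printf("missing %s: %v\n", p, err)
		}
	}
	if existing {
		fmt.Println("found existing configuration files, will attempt cluster restart")
	}
}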
	I1010 19:23:40.330520  148123 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:40.330576  148123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:40.341984  148123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:40.343386  148123 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:23:40.344334  148123 kubeconfig.go:62] /home/jenkins/minikube-integration/19787-81676/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-947203" cluster setting kubeconfig missing "old-k8s-version-947203" context setting]
	I1010 19:23:40.345360  148123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:40.347128  148123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:40.357902  148123 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I1010 19:23:40.357944  148123 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:40.357961  148123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:40.358031  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.399970  148123 cri.go:89] found id: ""
	I1010 19:23:40.400057  148123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:40.418527  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:40.429182  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:40.429217  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:40.429262  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:40.439287  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:40.439363  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:40.449479  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:40.459288  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:40.459359  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:40.472733  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.484890  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:40.484958  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.495301  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:40.504750  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:40.504820  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:40.515034  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:40.526168  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:40.675022  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.566371  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.815296  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.930082  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
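Because existing configuration was found, only individual `kubeadm init` phases are re-run (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml instead of a full init. A condensed Go sketch of that phase sequence; it drops the PATH prefix pointing at the cached v1.20.0 binaries that the real commands use, and assumes it runs on the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Phases mirrored from the log: certs, kubeconfig, kubelet-start,
// control-plane and etcd are re-initialised against the generated
// kubeadm.yaml rather than running a full `kubeadm init`.
var phases = [][]string{
	{"certs", "all"},
	{"kubeconfig", "all"},
	{"kubelet-start"},
	{"control-plane", "all"},
	{"etcd", "local"},
}

func main() {
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}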
	I1010 19:23:42.027597  148123 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:42.027713  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:42.528219  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.527735  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
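The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines that follow are a poll, roughly every 500ms, for the apiserver process to appear after the control-plane phases were re-run. A minimal Go sketch of such a loop; the pgrep pattern is the one from the log, while the timeout value is an assumption for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls until a kube-apiserver process is visible
// or the timeout expires.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err := cmd.Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}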
	I1010 19:23:40.058247  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058783  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058835  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:40.058696  149213 retry.go:31] will retry after 2.214094081s: waiting for machine to come up
	I1010 19:23:42.274629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275194  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:42.275106  149213 retry.go:31] will retry after 2.126528577s: waiting for machine to come up
	I1010 19:23:42.550865  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:45.051043  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:44.028463  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.527916  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.027911  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.528797  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.028772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.527982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.027799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.527894  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.028352  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.527935  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.403101  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403575  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403616  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:44.403534  149213 retry.go:31] will retry after 3.603964622s: waiting for machine to come up
	I1010 19:23:48.008726  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009142  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009191  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:48.009100  149213 retry.go:31] will retry after 3.639744981s: waiting for machine to come up
	I1010 19:23:47.551003  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:49.661572  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:52.858209  147213 start.go:364] duration metric: took 56.558774237s to acquireMachinesLock for "no-preload-320324"
	I1010 19:23:52.858274  147213 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:52.858283  147213 fix.go:54] fixHost starting: 
	I1010 19:23:52.858705  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:52.858742  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:52.878428  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I1010 19:23:52.878955  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:52.879563  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:23:52.879599  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:52.879945  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:52.880144  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:23:52.880282  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:23:52.881626  147213 fix.go:112] recreateIfNeeded on no-preload-320324: state=Stopped err=<nil>
	I1010 19:23:52.881650  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	W1010 19:23:52.881799  147213 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:52.883912  147213 out.go:177] * Restarting existing kvm2 VM for "no-preload-320324" ...
	I1010 19:23:49.028421  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:49.528271  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.028462  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.527867  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.028782  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.528581  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.028732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.027852  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.528647  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.885239  147213 main.go:141] libmachine: (no-preload-320324) Calling .Start
	I1010 19:23:52.885429  147213 main.go:141] libmachine: (no-preload-320324) Ensuring networks are active...
	I1010 19:23:52.886211  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network default is active
	I1010 19:23:52.886749  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network mk-no-preload-320324 is active
	I1010 19:23:52.887310  147213 main.go:141] libmachine: (no-preload-320324) Getting domain xml...
	I1010 19:23:52.888034  147213 main.go:141] libmachine: (no-preload-320324) Creating domain...
	I1010 19:23:51.652975  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653464  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Found IP for machine: 192.168.50.32
	I1010 19:23:51.653487  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserving static IP address...
	I1010 19:23:51.653509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has current primary IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653910  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.653956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | skip adding static IP to network mk-default-k8s-diff-port-361847 - found existing host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"}
	I1010 19:23:51.653974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserved static IP address: 192.168.50.32
	I1010 19:23:51.653993  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for SSH to be available...
	I1010 19:23:51.654006  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Getting to WaitForSSH function...
	I1010 19:23:51.655927  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656210  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.656240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656334  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH client type: external
	I1010 19:23:51.656372  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa (-rw-------)
	I1010 19:23:51.656409  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:51.656425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | About to run SSH command:
	I1010 19:23:51.656436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | exit 0
	I1010 19:23:51.780839  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:51.781206  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetConfigRaw
	I1010 19:23:51.781939  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:51.784347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784663  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.784696  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784918  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:23:51.785134  148525 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:51.785158  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:51.785403  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.787817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788306  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.788347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788547  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.788807  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789038  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789274  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.789515  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.789802  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.789825  148525 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:51.893367  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
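provisionDockerMachine starts by running `hostname` over SSH as user docker with the machine's id_rsa key. A rough sketch of that first exchange using golang.org/x/crypto/ssh; this only approximates what the "native" client type does, with the address and key path copied from the log:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the external client also disables StrictHostKeyChecking
	}
	client, err := ssh.Dial("tcp", "192.168.50.32:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname") // same first command as in the provisioning log
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}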
	
	I1010 19:23:51.893399  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893652  148525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-361847"
	I1010 19:23:51.893699  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.896986  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897377  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.897422  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897662  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.897815  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.897949  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.898064  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.898302  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.898489  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.898502  148525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-361847 && echo "default-k8s-diff-port-361847" | sudo tee /etc/hostname
	I1010 19:23:52.015158  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361847
	
	I1010 19:23:52.015199  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.018094  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018468  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.018497  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018683  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.018901  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019039  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.019474  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.019690  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.019708  148525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-361847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-361847/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-361847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:52.133923  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:52.133960  148525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:52.134007  148525 buildroot.go:174] setting up certificates
	I1010 19:23:52.134023  148525 provision.go:84] configureAuth start
	I1010 19:23:52.134043  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:52.134351  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.137242  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137637  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.137670  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137860  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.140264  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.140672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140833  148525 provision.go:143] copyHostCerts
	I1010 19:23:52.140907  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:52.140922  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:52.140977  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:52.141088  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:52.141098  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:52.141118  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:52.141175  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:52.141182  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:52.141213  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:52.141264  148525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-361847 san=[127.0.0.1 192.168.50.32 default-k8s-diff-port-361847 localhost minikube]
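configureAuth then issues a server certificate signed by the local CA with the SAN list shown above (127.0.0.1, 192.168.50.32, the machine name, localhost, minikube). A hedged Go sketch of such issuance with crypto/x509; the CA file names, the RSA/PKCS#1 key format and the validity period are assumptions, and error handling is reduced to panics:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Hypothetical stand-ins for the ca.pem / ca-key.pem used by provision.go.
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem")))
	ca := must(x509.ParseCertificate(caBlock.Bytes))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-361847"}},
		// SAN list copied from the provision.go line above.
		DNSNames:    []string{"default-k8s-diff-port-361847", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.32")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(365 * 24 * time.Hour), // validity is illustrative
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey))
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}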
	I1010 19:23:52.241146  148525 provision.go:177] copyRemoteCerts
	I1010 19:23:52.241212  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:52.241241  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.244061  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244463  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.244490  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244731  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.244929  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.245110  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.245228  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.327309  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:52.352288  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 19:23:52.376308  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:52.400807  148525 provision.go:87] duration metric: took 266.765119ms to configureAuth
	I1010 19:23:52.400862  148525 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:52.401065  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:52.401171  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.403552  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.403919  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.403950  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.404173  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.404371  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404513  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.404743  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.404927  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.404949  148525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:52.622902  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:52.622930  148525 machine.go:96] duration metric: took 837.779579ms to provisionDockerMachine
	I1010 19:23:52.622942  148525 start.go:293] postStartSetup for "default-k8s-diff-port-361847" (driver="kvm2")
	I1010 19:23:52.622952  148525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:52.622968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.623331  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:52.623369  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.626106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626435  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.626479  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626721  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.626932  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.627091  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.627262  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.708050  148525 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:52.712524  148525 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:52.712550  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:52.712608  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:52.712688  148525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:52.712782  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:52.723719  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:52.747686  148525 start.go:296] duration metric: took 124.729371ms for postStartSetup
	I1010 19:23:52.747727  148525 fix.go:56] duration metric: took 20.853721623s for fixHost
	I1010 19:23:52.747749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.750316  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750645  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.750677  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.751046  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751195  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751333  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.751511  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.751733  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.751749  148525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:52.857986  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588232.831281012
	
	I1010 19:23:52.858019  148525 fix.go:216] guest clock: 1728588232.831281012
	I1010 19:23:52.858029  148525 fix.go:229] Guest: 2024-10-10 19:23:52.831281012 +0000 UTC Remote: 2024-10-10 19:23:52.747731551 +0000 UTC m=+158.845659062 (delta=83.549461ms)
	I1010 19:23:52.858075  148525 fix.go:200] guest clock delta is within tolerance: 83.549461ms
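The guest clock check runs `date +%s.%N` over SSH and compares the result with the host-side timestamp; only if the delta exceeded a tolerance would the clock be resynced. A small Go sketch of that comparison using the two readings from this run (the tolerance threshold itself is not shown in the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` (seconds.nanoseconds),
// as run over SSH above, into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1728588232.831281012") // guest reading from the log
	if err != nil {
		panic(err)
	}
	host := time.Unix(1728588232, 747731551)               // host-side reading from the log
	fmt.Printf("guest clock delta: %v\n", guest.Sub(host)) // ~83.5ms, within tolerance
}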
	I1010 19:23:52.858088  148525 start.go:83] releasing machines lock for "default-k8s-diff-port-361847", held for 20.964121636s
	I1010 19:23:52.858120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.858491  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.861220  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.861672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861828  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862337  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862548  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862655  148525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:52.862702  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.862825  148525 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:52.862854  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.865579  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.865960  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866290  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866300  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.866319  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866423  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866496  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866648  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866671  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.866798  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866910  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.966354  148525 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:52.972526  148525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:53.119801  148525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:53.126287  148525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:53.126355  148525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:53.147301  148525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
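Any pre-existing bridge or podman CNI configs under /etc/cni/net.d are renamed with a .mk_disabled suffix so CRI-O only picks up the intended network config. A native Go sketch of that find/mv step; the directory is the one from the log, the rest is illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs so the runtime
// cannot load them, mirroring the find -exec mv step in the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("disabled:", disabled)
}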
	I1010 19:23:53.147325  148525 start.go:495] detecting cgroup driver to use...
	I1010 19:23:53.147381  148525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:53.167368  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:53.183239  148525 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:53.183308  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:53.203230  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:53.217261  148525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:53.343555  148525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:53.491952  148525 docker.go:233] disabling docker service ...
	I1010 19:23:53.492054  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:53.508136  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:53.521662  148525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:53.651858  148525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:53.781954  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:53.803934  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:53.826070  148525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:53.826146  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.837506  148525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:53.837587  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.848653  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.860511  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.873254  148525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:53.887862  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.899507  148525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.923325  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.934999  148525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:53.946869  148525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:53.946945  148525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:53.968116  148525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:53.980109  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:54.106345  148525 ssh_runner.go:195] Run: sudo systemctl restart crio
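The CRI-O preparation above boils down to a handful of in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod") followed by a daemon-reload and service restart. A condensed Go sketch that replays those commands; it assumes root on the guest and omits the sysctl/br_netfilter steps also shown in the log:

package main

import (
	"log"
	"os/exec"
)

// Condensed from the commands in the log: point CRI-O at the desired pause
// image, force the cgroupfs cgroup manager, then restart the service.
var steps = []string{
	`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo systemctl daemon-reload`,
	`sudo systemctl restart crio`,
}

func main() {
	for _, s := range steps {
		if out, err := exec.Command("/bin/bash", "-c", s).CombinedOutput(); err != nil {
			log.Fatalf("%q failed: %v\n%s", s, err, out)
		}
	}
}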
	I1010 19:23:54.210345  148525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:54.210417  148525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:54.215968  148525 start.go:563] Will wait 60s for crictl version
	I1010 19:23:54.216037  148525 ssh_runner.go:195] Run: which crictl
	I1010 19:23:54.219885  148525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:54.260286  148525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:54.260375  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.289908  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.320940  148525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:52.050137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.060194  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:56.551981  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.027982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.528065  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.027890  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.527753  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.027877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.028263  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.028743  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.528039  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.234149  147213 main.go:141] libmachine: (no-preload-320324) Waiting to get IP...
	I1010 19:23:54.235147  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.235598  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.235657  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.235580  149378 retry.go:31] will retry after 308.921504ms: waiting for machine to come up
	I1010 19:23:54.546327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.547002  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.547029  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.546956  149378 retry.go:31] will retry after 288.92327ms: waiting for machine to come up
	I1010 19:23:54.837625  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.838136  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.838164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.838054  149378 retry.go:31] will retry after 321.948113ms: waiting for machine to come up
	I1010 19:23:55.161940  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.162494  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.162526  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.162441  149378 retry.go:31] will retry after 573.848095ms: waiting for machine to come up
	I1010 19:23:55.739080  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.739592  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.739620  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.739494  149378 retry.go:31] will retry after 529.087622ms: waiting for machine to come up
	I1010 19:23:56.270324  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.270899  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.270929  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.270850  149378 retry.go:31] will retry after 629.204989ms: waiting for machine to come up
	I1010 19:23:56.901836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.902283  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.902325  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.902222  149378 retry.go:31] will retry after 804.309499ms: waiting for machine to come up
	I1010 19:23:57.708806  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:57.709175  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:57.709208  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:57.709151  149378 retry.go:31] will retry after 1.204078295s: waiting for machine to come up
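The libmachine lines above repeatedly look up the VM's MAC address in the libvirt DHCP leases and back off with growing, jittered delays ("will retry after 308ms ... 1.2s ..."). A generic retry-with-growing-backoff sketch of that pattern; findIP is a placeholder, not libmachine's API:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // findIP stands in for "look up the DHCP lease for this MAC"; placeholder only.
    func findIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    func main() {
        backoff := 300 * time.Millisecond
        for attempt := 1; attempt <= 12; attempt++ {
            if ip, err := findIP("52:54:00:95:03:cd"); err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            // Grow the delay and add jitter, similar to the increasing
            // "will retry after ..." intervals in the log.
            wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("attempt %d: waiting %v for machine to come up\n", attempt, wait)
            time.Sleep(wait)
            backoff = backoff * 3 / 2
        }
        fmt.Println("gave up waiting for an IP")
    }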
	I1010 19:23:54.322534  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:54.325744  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326217  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:54.326257  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326533  148525 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:54.331527  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:54.343881  148525 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:54.344033  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:54.344084  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:54.389066  148525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:54.389149  148525 ssh_runner.go:195] Run: which lz4
	I1010 19:23:54.393550  148525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:54.397787  148525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:54.397833  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:55.897111  148525 crio.go:462] duration metric: took 1.503593301s to copy over tarball
	I1010 19:23:55.897212  148525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:58.060691  148525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16343467s)
	I1010 19:23:58.060731  148525 crio.go:469] duration metric: took 2.163580526s to extract the tarball
	I1010 19:23:58.060741  148525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:58.103877  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:58.162881  148525 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:58.162907  148525 cache_images.go:84] Images are preloaded, skipping loading
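The preload flow above works in two passes of "crictl images --output json": when the expected control-plane image is missing, the preloaded tarball is copied to /preloaded.tar.lz4 and unpacked into /var with the exact tar invocation from the log, then the second listing confirms everything is present. A hedged Go sketch of that check-and-extract step, with paths taken from the log and the imagePreloaded helper illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // imagePreloaded is a simplified stand-in: it just greps the crictl JSON
    // output for the image name instead of parsing it.
    func imagePreloaded(name string) bool {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        return err == nil && strings.Contains(string(out), name)
    }

    func main() {
        if imagePreloaded("registry.k8s.io/kube-apiserver:v1.31.1") {
            fmt.Println("all images are preloaded, skipping loading")
            return
        }
        // minikube scp's the tarball onto the guest first; here we assume it
        // already exists at /preloaded.tar.lz4.
        if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
            fmt.Println("preload tarball missing:", err)
            return
        }
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if err := cmd.Run(); err != nil {
            fmt.Println("extract failed:", err)
            return
        }
        _ = exec.Command("sudo", "rm", "/preloaded.tar.lz4").Run()
        fmt.Println("preload extracted")
    }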
	I1010 19:23:58.162915  148525 kubeadm.go:934] updating node { 192.168.50.32 8444 v1.31.1 crio true true} ...
	I1010 19:23:58.163031  148525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-361847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:58.163098  148525 ssh_runner.go:195] Run: crio config
	I1010 19:23:58.219804  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:23:58.219827  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:58.219837  148525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:58.219861  148525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-361847 NodeName:default-k8s-diff-port-361847 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:58.219982  148525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-361847"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:58.220042  148525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:58.231444  148525 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:58.231565  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:58.241835  148525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1010 19:23:58.259408  148525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:58.276571  148525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I1010 19:23:58.294640  148525 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:58.298503  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
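Both hosts entries (host.minikube.internal earlier and control-plane.minikube.internal here) are added with the same idempotent shell trick: grep for the entry first, and if it is absent, rewrite /etc/hosts without any stale line and append the fresh one via a temp file plus "sudo cp". A Go sketch of that pattern, with the hostname and IP copied from the log and passwordless sudo assumed:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureHostsEntry mirrors the update in the log: drop any stale line for
    // the host, append "IP<tab>host" to a temp file, then sudo cp it back.
    func ensureHostsEntry(ip, host string) error {
        // Fast path: the exact entry is already present.
        if err := exec.Command("grep", ip+"\t"+host, "/etc/hosts").Run(); err == nil {
            return nil
        }
        script := "{ grep -v $'\\t" + host + "$' /etc/hosts; printf '" + ip + "\\t" + host + "\\n'; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
        return exec.Command("/bin/bash", "-c", script).Run()
    }

    func main() {
        if err := ensureHostsEntry("192.168.50.32", "control-plane.minikube.internal"); err != nil {
            fmt.Println("failed to update /etc/hosts:", err)
        }
    }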
	I1010 19:23:58.312286  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:58.449757  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:58.467342  148525 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847 for IP: 192.168.50.32
	I1010 19:23:58.467377  148525 certs.go:194] generating shared ca certs ...
	I1010 19:23:58.467398  148525 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:58.467583  148525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:58.467642  148525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:58.467655  148525 certs.go:256] generating profile certs ...
	I1010 19:23:58.467826  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/client.key
	I1010 19:23:58.467895  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key.ae5e3f04
	I1010 19:23:58.467951  148525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key
	I1010 19:23:58.468089  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:58.468136  148525 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:58.468153  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:58.468194  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:58.468226  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:58.468260  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:58.468317  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:58.468931  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:58.529632  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:58.571900  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:58.612599  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:58.645536  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 19:23:58.675961  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:23:58.700712  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:58.725355  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:58.751138  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:58.775832  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:58.800729  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:58.825558  148525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:58.843331  148525 ssh_runner.go:195] Run: openssl version
	I1010 19:23:58.849271  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:58.861031  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865721  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865797  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.871961  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:58.884520  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:58.896744  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901507  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901571  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.907366  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:58.919784  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:58.931972  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936897  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936981  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.943007  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
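The cert lines above wire the minikube CAs into the guest's OpenSSL trust store: each PEM is copied under /usr/share/ca-certificates, "openssl x509 -hash -noout" prints its subject hash (b5213941 for minikubeCA in this log), and a /etc/ssl/certs/<hash>.0 symlink is created so OpenSSL-based clients find it. A short Go sketch of computing the hash and creating the guarded symlink, assuming root is available for the link:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // openssl prints the subject hash that OpenSSL uses to locate CA certs.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Println("hashing failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" as seen in the log
        link := "/etc/ssl/certs/" + hash + ".0"
        // Same shape as the log's ln -fs, guarded so an existing link is kept.
        cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, cert, link)
        if err := exec.Command("sudo", "/bin/bash", "-c", cmd).Run(); err != nil {
            fmt.Println("linking failed:", err)
            return
        }
        fmt.Println("trusted", cert, "as", link)
    }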
	I1010 19:23:59.052037  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:01.551982  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:59.028519  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:59.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.528623  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.028697  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.528030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.028062  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.527876  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.028745  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.528569  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.914409  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:58.914894  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:58.914927  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:58.914831  149378 retry.go:31] will retry after 1.631827888s: waiting for machine to come up
	I1010 19:24:00.548505  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:00.549135  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:00.549164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:00.549043  149378 retry.go:31] will retry after 2.126895157s: waiting for machine to come up
	I1010 19:24:02.678328  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:02.678907  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:02.678969  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:02.678891  149378 retry.go:31] will retry after 2.754376625s: waiting for machine to come up
	I1010 19:23:58.955104  148525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:58.959833  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:58.966528  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:58.973590  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:58.982390  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:58.990767  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:58.997162  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:23:59.003647  148525 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:59.003786  148525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:59.003865  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.048772  148525 cri.go:89] found id: ""
	I1010 19:23:59.048869  148525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:59.061267  148525 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:59.061288  148525 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:59.061338  148525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:59.072629  148525 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:59.074287  148525 kubeconfig.go:125] found "default-k8s-diff-port-361847" server: "https://192.168.50.32:8444"
	I1010 19:23:59.077880  148525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:59.090738  148525 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I1010 19:23:59.090783  148525 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:59.090799  148525 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:59.090886  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.136762  148525 cri.go:89] found id: ""
	I1010 19:23:59.136888  148525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:59.155937  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:59.166471  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:59.166493  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:59.166549  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:23:59.178247  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:59.178313  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:59.189455  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:23:59.200127  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:59.200204  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:59.210764  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.221048  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:59.221119  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.231762  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:23:59.242152  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:59.242217  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:59.252608  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:59.265219  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:59.391743  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.243288  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.453782  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.532137  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.623598  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:00.623711  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.124678  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.624626  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.667587  148525 api_server.go:72] duration metric: took 1.043987857s to wait for apiserver process to appear ...
	I1010 19:24:01.667621  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:01.667649  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:01.668298  148525 api_server.go:269] stopped: https://192.168.50.32:8444/healthz: Get "https://192.168.50.32:8444/healthz": dial tcp 192.168.50.32:8444: connect: connection refused
	I1010 19:24:02.168273  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.275654  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.275695  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.275713  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.309713  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.309770  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.668325  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.684992  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:05.685031  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.168198  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.176584  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:06.176627  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.668130  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.682049  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:24:06.692780  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:06.692811  148525 api_server.go:131] duration metric: took 5.025182717s to wait for apiserver health ...
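The healthz section above shows the typical progression after a control-plane restart: first a connection refused, then 403 for the anonymous probe, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200 "ok". A Go sketch of that polling loop; it skips TLS verification because the probe is unauthenticated, whereas minikube's real check trusts the cluster CA, and the address/port are copied from the log:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Anonymous probe: certificate verification skipped here only for
            // the sketch; non-200 responses just mean "keep waiting".
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.50.32:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }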
	I1010 19:24:06.692820  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:24:06.692831  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:06.694447  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:03.558797  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:06.054012  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:04.028711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:04.528770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.028716  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.528083  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.028204  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.528430  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.027972  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.027829  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.528676  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.435450  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:05.435940  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:05.435970  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:05.435888  149378 retry.go:31] will retry after 2.981990051s: waiting for machine to come up
	I1010 19:24:08.419385  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:08.419982  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:08.420006  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:08.419905  149378 retry.go:31] will retry after 3.976204267s: waiting for machine to come up
	I1010 19:24:06.695841  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:06.711212  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:06.747753  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:06.768344  148525 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:06.768429  148525 system_pods.go:61] "coredns-7c65d6cfc9-rv8vq" [93b209ea-bb5f-40c5-aea8-8771b785f021] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:06.768446  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [65129999-984d-497c-a6e1-9c53a5374991] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:06.768452  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [5f18ba24-29cf-433e-a70d-23757278c04f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:06.768460  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [c189c785-8ac5-4003-802d-9e7c089d450e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:06.768467  148525 system_pods.go:61] "kube-proxy-v5lm8" [e78eabf9-5c65-4cba-83fd-0837cef05126] Running
	I1010 19:24:06.768476  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [4f84f0f5-e255-4534-9db3-e5cfee0b2447] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:06.768485  148525 system_pods.go:61] "metrics-server-6867b74b74-h5kjm" [a3979b79-bd21-490b-97ac-0a78efd43a99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:06.768493  148525 system_pods.go:61] "storage-provisioner" [ca8606d3-9adb-46da-886a-3081b11b52a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:24:06.768499  148525 system_pods.go:74] duration metric: took 20.716461ms to wait for pod list to return data ...
	I1010 19:24:06.768509  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:06.777935  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:06.777973  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:06.777988  148525 node_conditions.go:105] duration metric: took 9.473726ms to run NodePressure ...
	I1010 19:24:06.778019  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:07.053296  148525 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057585  148525 kubeadm.go:739] kubelet initialised
	I1010 19:24:07.057608  148525 kubeadm.go:740] duration metric: took 4.283027ms waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057618  148525 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:07.064157  148525 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.069962  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.069989  148525 pod_ready.go:82] duration metric: took 5.791958ms for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.069999  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.070022  148525 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.075615  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075644  148525 pod_ready.go:82] duration metric: took 5.608749ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.075654  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075661  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.081717  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081743  148525 pod_ready.go:82] duration metric: took 6.074977ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.081754  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081761  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.152204  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152244  148525 pod_ready.go:82] duration metric: took 70.475599ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.152258  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152266  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551283  148525 pod_ready.go:93] pod "kube-proxy-v5lm8" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:07.551311  148525 pod_ready.go:82] duration metric: took 399.036581ms for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551324  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
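The pod_ready lines above wait up to 4m0s per system-critical pod for the PodReady condition, and deliberately skip the wait (the "skipping!" warnings) while the node itself still reports Ready=False. A rough client-go sketch of checking one pod's Ready condition in a loop; the pod name is taken from the log, but the kubeconfig path is an assumption and this is not minikube's pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s extra wait
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-v5lm8", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }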
	I1010 19:24:08.550896  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:10.551437  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:09.028538  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:09.527883  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.028693  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.528439  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.028528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.528679  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.028002  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.527904  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.028685  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.527833  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.401115  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401808  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has current primary IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401841  147213 main.go:141] libmachine: (no-preload-320324) Found IP for machine: 192.168.72.11
	I1010 19:24:12.401856  147213 main.go:141] libmachine: (no-preload-320324) Reserving static IP address...
	I1010 19:24:12.402368  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.402407  147213 main.go:141] libmachine: (no-preload-320324) DBG | skip adding static IP to network mk-no-preload-320324 - found existing host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"}
	I1010 19:24:12.402426  147213 main.go:141] libmachine: (no-preload-320324) Reserved static IP address: 192.168.72.11
	I1010 19:24:12.402443  147213 main.go:141] libmachine: (no-preload-320324) Waiting for SSH to be available...
	I1010 19:24:12.402458  147213 main.go:141] libmachine: (no-preload-320324) DBG | Getting to WaitForSSH function...
	I1010 19:24:12.404803  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405200  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.405226  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405461  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH client type: external
	I1010 19:24:12.405494  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa (-rw-------)
	I1010 19:24:12.405527  147213 main.go:141] libmachine: (no-preload-320324) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:24:12.405541  147213 main.go:141] libmachine: (no-preload-320324) DBG | About to run SSH command:
	I1010 19:24:12.405554  147213 main.go:141] libmachine: (no-preload-320324) DBG | exit 0
	I1010 19:24:12.529010  147213 main.go:141] libmachine: (no-preload-320324) DBG | SSH cmd err, output: <nil>: 
	I1010 19:24:12.529401  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetConfigRaw
	I1010 19:24:12.530257  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.533285  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533692  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.533727  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533963  147213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/config.json ...
	I1010 19:24:12.534205  147213 machine.go:93] provisionDockerMachine start ...
	I1010 19:24:12.534230  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:12.534450  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.536585  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.536976  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.537003  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.537133  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.537323  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537512  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537689  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.537925  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.538138  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.538151  147213 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:24:12.641679  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:24:12.641706  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.641964  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:24:12.642002  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.642235  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.645149  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645488  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.645521  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.645836  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646001  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646155  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.646352  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.646533  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.646545  147213 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320324 && echo "no-preload-320324" | sudo tee /etc/hostname
	I1010 19:24:12.766449  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320324
	
	I1010 19:24:12.766480  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.769836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770331  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.770356  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770584  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.770810  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.770962  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.771119  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.771252  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.771448  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.771470  147213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320324/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:24:12.882458  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:24:12.882495  147213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:24:12.882537  147213 buildroot.go:174] setting up certificates
	I1010 19:24:12.882547  147213 provision.go:84] configureAuth start
	I1010 19:24:12.882562  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.882865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.885854  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886139  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.886173  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886308  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.888479  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.888819  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888976  147213 provision.go:143] copyHostCerts
	I1010 19:24:12.889037  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:24:12.889049  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:24:12.889102  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:24:12.889235  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:24:12.889246  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:24:12.889278  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:24:12.889370  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:24:12.889381  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:24:12.889406  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:24:12.889493  147213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.no-preload-320324 san=[127.0.0.1 192.168.72.11 localhost minikube no-preload-320324]
	I1010 19:24:12.978176  147213 provision.go:177] copyRemoteCerts
	I1010 19:24:12.978235  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:24:12.978261  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.981662  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982182  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.982218  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.982647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.982829  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.983005  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.067269  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:24:13.092777  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 19:24:13.118530  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:24:13.143401  147213 provision.go:87] duration metric: took 260.833877ms to configureAuth
	I1010 19:24:13.143436  147213 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:24:13.143678  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:13.143776  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.147086  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147507  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.147531  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147787  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.148032  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148222  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.148660  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.149013  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.149041  147213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:24:13.375683  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:24:13.375714  147213 machine.go:96] duration metric: took 841.493636ms to provisionDockerMachine
	I1010 19:24:13.375736  147213 start.go:293] postStartSetup for "no-preload-320324" (driver="kvm2")
	I1010 19:24:13.375754  147213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:24:13.375775  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.376085  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:24:13.376116  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.378855  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379179  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.379224  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379408  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.379608  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.379769  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.379910  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.459580  147213 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:24:13.463644  147213 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:24:13.463674  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:24:13.463751  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:24:13.463845  147213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:24:13.463963  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:24:13.473483  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:13.498773  147213 start.go:296] duration metric: took 123.021762ms for postStartSetup
	I1010 19:24:13.498814  147213 fix.go:56] duration metric: took 20.640532088s for fixHost
	I1010 19:24:13.498834  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.501681  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502243  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.502281  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502476  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.502679  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502835  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502993  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.503177  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.503383  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.503396  147213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:24:13.613929  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588253.586950075
	
	I1010 19:24:13.613954  147213 fix.go:216] guest clock: 1728588253.586950075
	I1010 19:24:13.613963  147213 fix.go:229] Guest: 2024-10-10 19:24:13.586950075 +0000 UTC Remote: 2024-10-10 19:24:13.498818059 +0000 UTC m=+359.788559229 (delta=88.132016ms)
	I1010 19:24:13.613988  147213 fix.go:200] guest clock delta is within tolerance: 88.132016ms
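fix.go reads the guest clock over SSH with date +%s.%N, compares it against the host clock, and only resets the guest time when the drift is too large; here the ~88ms delta is accepted. A minimal shell sketch of that comparison, assuming a 1-second tolerance (the actual threshold is not shown in this log):
	key=/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa
	guest=$(ssh -i "$key" docker@192.168.72.11 'date +%s.%N')   # guest clock
	host=$(date +%s.%N)                                         # host clock
	awk -v g="$guest" -v h="$host" \
	  'BEGIN { d = g - h; if (d < 0) d = -d; if (d < 1) print "within tolerance: " d "s"; else print "clock skew: " d "s" }'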
	I1010 19:24:13.614020  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 20.755775587s
	I1010 19:24:13.614063  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.614473  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:13.617327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.617694  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.617721  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.618016  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618670  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618884  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618989  147213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:24:13.619047  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.619142  147213 ssh_runner.go:195] Run: cat /version.json
	I1010 19:24:13.619185  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.621972  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622229  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622322  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622348  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622533  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622666  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622697  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622736  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.622865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622930  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623059  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.623073  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.623225  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623349  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.720999  147213 ssh_runner.go:195] Run: systemctl --version
	I1010 19:24:13.727679  147213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:24:09.562834  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:12.058686  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:13.870558  147213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:24:13.877853  147213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:24:13.877923  147213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:24:13.896295  147213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:24:13.896325  147213 start.go:495] detecting cgroup driver to use...
	I1010 19:24:13.896400  147213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:24:13.913122  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:24:13.929359  147213 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:24:13.929437  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:24:13.944840  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:24:13.960062  147213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:24:14.090774  147213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:24:14.246094  147213 docker.go:233] disabling docker service ...
	I1010 19:24:14.246161  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:24:14.264682  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:24:14.280264  147213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:24:14.437156  147213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:24:14.569220  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:24:14.585723  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:24:14.607349  147213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:24:14.607429  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.619113  147213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:24:14.619198  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.631818  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.643977  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.655753  147213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:24:14.667235  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.679225  147213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.698760  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
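The sed/grep edits above pin the pause image, switch CRI-O to the cgroupfs cgroup driver, move conmon into the pod cgroup, and open low ports to unprivileged processes. A quick way to confirm the result (expected values inferred from the commands above; the file contents are not echoed in this log):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",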
	I1010 19:24:14.710440  147213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:24:14.722565  147213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:24:14.722625  147213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:24:14.740587  147213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
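The sysctl probe fails because /proc/sys/net/bridge only exists once the br_netfilter module is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding. The same check-and-fallback, condensed from the commands above:
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter            # exposes /proc/sys/net/bridge/*
	fi
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"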
	I1010 19:24:14.752630  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:14.887728  147213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:24:14.989026  147213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:24:14.989109  147213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:24:14.995309  147213 start.go:563] Will wait 60s for crictl version
	I1010 19:24:14.995366  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.999840  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:24:15.043758  147213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:24:15.043856  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.079274  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.116630  147213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:24:13.050633  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:15.552413  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:14.028090  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:14.528072  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.028648  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.528388  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.028060  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.528472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.028098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.528125  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.028563  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.528613  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.118343  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:15.121596  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122101  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:15.122133  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122396  147213 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1010 19:24:15.127140  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:15.141249  147213 kubeadm.go:883] updating cluster {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:24:15.141375  147213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:24:15.141417  147213 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:24:15.183271  147213 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:24:15.183303  147213 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:24:15.183412  147213 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.183444  147213 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.183452  147213 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.183459  147213 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1010 19:24:15.183422  147213 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.183493  147213 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.183512  147213 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.183507  147213 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.185099  147213 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.185098  147213 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.185103  147213 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.185106  147213 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
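With no preload tarball on the VM and no images in the host Docker daemon, cache_images falls back to the per-image flow seen below: inspect the image on the VM, remove any copy with the wrong digest, stat the cached tarball, then podman-load it into CRI-O's store. A condensed single-image sketch of that flow (one example image; the real code runs these steps over SSH and for all images in parallel):
	img=registry.k8s.io/kube-controller-manager:v1.31.1
	tar=/var/lib/minikube/images/kube-controller-manager_v1.31.1
	sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1 || {
	  sudo /usr/bin/crictl rmi "$img" || true   # drop any stale copy whose hash does not match
	  stat -c "%s %y" "$tar"                    # cached tarball already copied? (see "copy: skipping ... (exists)")
	  sudo podman load -i "$tar"                # load it into the container runtime's image store
	}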
	I1010 19:24:15.328484  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.333573  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.340047  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.358922  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1010 19:24:15.359800  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.366668  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.409942  147213 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1010 19:24:15.409995  147213 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.410050  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.416186  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.452343  147213 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1010 19:24:15.452385  147213 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.452426  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.533567  147213 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1010 19:24:15.533620  147213 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.533671  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585611  147213 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1010 19:24:15.585659  147213 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.585685  147213 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1010 19:24:15.585712  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585724  147213 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.585765  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585769  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.585805  147213 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1010 19:24:15.585832  147213 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.585856  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.585872  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585943  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.603131  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.661918  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.683739  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.683760  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.683833  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.683880  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.685385  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.792253  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.818116  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.818183  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.818289  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.818321  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.818402  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.878069  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1010 19:24:15.878202  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.940520  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.953841  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1010 19:24:15.953955  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:15.953990  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.954047  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1010 19:24:15.954115  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1010 19:24:15.954120  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1010 19:24:15.954130  147213 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954144  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:15.954157  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954205  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:16.005975  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1010 19:24:16.006028  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1010 19:24:16.006090  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:16.023905  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1010 19:24:16.023990  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1010 19:24:16.024024  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:16.024023  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1010 19:24:16.033715  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.150881  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.144766677s)
	I1010 19:24:18.150935  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1010 19:24:18.150931  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.196753845s)
	I1010 19:24:18.150944  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.126894115s)
	I1010 19:24:18.150973  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1010 19:24:18.150953  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1010 19:24:18.150982  147213 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.117235962s)
	I1010 19:24:18.151002  147213 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151014  147213 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1010 19:24:18.151053  147213 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.151069  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151097  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.059223  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:14.059252  148525 pod_ready.go:82] duration metric: took 6.507918149s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:14.059266  148525 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:16.066908  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.082398  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.051799  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:20.552644  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:19.028306  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:19.527854  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.028765  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.528245  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.027936  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.528172  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.028527  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.528446  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.028709  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.528711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.952099  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.801005716s)
	I1010 19:24:21.952134  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1010 19:24:21.952163  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952165  147213 ssh_runner.go:235] Completed: which crictl: (3.801048272s)
	I1010 19:24:21.952212  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952225  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:21.993627  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:20.566055  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:22.567145  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:23.053514  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:25.554151  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:24.028402  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:24.528516  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.028207  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.527928  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.028032  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.527826  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.028609  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.528448  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.028018  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.528597  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.929370  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.977128659s)
	I1010 19:24:23.929418  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1010 19:24:23.929450  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929498  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.935844384s)
	I1010 19:24:23.929532  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929551  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:26.009485  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079908324s)
	I1010 19:24:26.009567  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 19:24:26.009484  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079925224s)
	I1010 19:24:26.009641  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1010 19:24:26.009671  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:26.009684  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:26.009720  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:27.968483  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.958772952s)
	I1010 19:24:27.968534  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1010 19:24:27.968559  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.958813643s)
	I1010 19:24:27.968587  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1010 19:24:27.968619  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:27.968686  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:25.069787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:27.567013  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:28.050968  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:30.551528  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:29.027960  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.528054  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.028227  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.527791  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.027790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.528761  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.028343  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.528553  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.028195  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.527895  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.315157  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.346440456s)
	I1010 19:24:29.315211  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1010 19:24:29.315244  147213 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:29.315296  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:30.173931  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 19:24:30.173977  147213 cache_images.go:123] Successfully loaded all cached images
	I1010 19:24:30.173985  147213 cache_images.go:92] duration metric: took 14.990666845s to LoadCachedImages
	I1010 19:24:30.174001  147213 kubeadm.go:934] updating node { 192.168.72.11 8443 v1.31.1 crio true true} ...
	I1010 19:24:30.174129  147213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:24:30.174221  147213 ssh_runner.go:195] Run: crio config
	I1010 19:24:30.222677  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:30.222702  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:30.222711  147213 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:24:30.222736  147213 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320324 NodeName:no-preload-320324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:24:30.222923  147213 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320324"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:24:30.222998  147213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:24:30.233755  147213 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:24:30.233818  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:24:30.243829  147213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1010 19:24:30.263056  147213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:24:30.282362  147213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1010 19:24:30.300449  147213 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I1010 19:24:30.304661  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:30.317462  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:30.445515  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:30.462816  147213 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324 for IP: 192.168.72.11
	I1010 19:24:30.462847  147213 certs.go:194] generating shared ca certs ...
	I1010 19:24:30.462871  147213 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:30.463074  147213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:24:30.463132  147213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:24:30.463145  147213 certs.go:256] generating profile certs ...
	I1010 19:24:30.463289  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/client.key
	I1010 19:24:30.463364  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key.a7785fc5
	I1010 19:24:30.463413  147213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key
	I1010 19:24:30.463565  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:24:30.463604  147213 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:24:30.463617  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:24:30.463657  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:24:30.463689  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:24:30.463721  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:24:30.463774  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:30.464502  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:24:30.525320  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:24:30.565229  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:24:30.597731  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:24:30.626174  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 19:24:30.659991  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:24:30.685662  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:24:30.710757  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:24:30.736325  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:24:30.771239  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:24:30.796467  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:24:30.821925  147213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:24:30.840743  147213 ssh_runner.go:195] Run: openssl version
	I1010 19:24:30.846898  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:24:30.858410  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863188  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863260  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.869307  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:24:30.880319  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:24:30.891307  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895771  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895828  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.901510  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:24:30.912627  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:24:30.924330  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929108  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929194  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.935266  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:24:30.946714  147213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:24:30.951692  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:24:30.957910  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:24:30.964296  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:24:30.971001  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:24:30.977427  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:24:30.984201  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
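The six openssl runs above use "-checkend 86400", i.e. they verify that each control-plane certificate does not expire within the next 24 hours. A minimal Go sketch of the same check (an illustration only, not minikube's code; the path is one of the certs checked above):

// cert_checkend.go: reports whether a PEM certificate expires within a given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Same question as "openssl x509 -checkend 86400": is NotAfter inside the window?
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least 24h")
	}
}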
	I1010 19:24:30.990532  147213 kubeadm.go:392] StartCluster: {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:24:30.990622  147213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:24:30.990727  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.033544  147213 cri.go:89] found id: ""
	I1010 19:24:31.033624  147213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:24:31.044956  147213 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:24:31.044975  147213 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:24:31.045025  147213 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:24:31.056563  147213 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:24:31.057705  147213 kubeconfig.go:125] found "no-preload-320324" server: "https://192.168.72.11:8443"
	I1010 19:24:31.059853  147213 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:24:31.071304  147213 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.11
	I1010 19:24:31.071338  147213 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:24:31.071353  147213 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:24:31.071444  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.107345  147213 cri.go:89] found id: ""
	I1010 19:24:31.107429  147213 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:24:31.125556  147213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:24:31.135390  147213 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:24:31.135428  147213 kubeadm.go:157] found existing configuration files:
	
	I1010 19:24:31.135478  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:24:31.144653  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:24:31.144715  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:24:31.154458  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:24:31.163444  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:24:31.163501  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:24:31.172633  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.181939  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:24:31.182001  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.191638  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:24:31.200846  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:24:31.200935  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:24:31.211048  147213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:24:31.221008  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:31.352733  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.270546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.474510  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.551517  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.707737  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:32.707826  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.208647  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.708539  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.728647  147213 api_server.go:72] duration metric: took 1.020907246s to wait for apiserver process to appear ...
	I1010 19:24:33.728678  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:33.728701  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:30.066635  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.066732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.552277  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:35.051399  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:34.028434  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:34.527949  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.028224  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.528470  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.028540  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.528499  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.028362  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.528483  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.028531  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.527865  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.025756  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.025787  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.025802  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.078247  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.078283  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.229601  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.237166  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.237204  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:37.728824  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.735660  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.735700  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.229746  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.234449  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:38.234491  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.729000  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.737564  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:24:38.751982  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:38.752012  147213 api_server.go:131] duration metric: took 5.023326632s to wait for apiserver health ...
	I1010 19:24:38.752023  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:38.752030  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:38.753351  147213 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
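The healthz progression above is typical of a restarting apiserver: 403 while the unauthenticated probe is rejected as "system:anonymous", 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still completing, then 200 "ok". A minimal sketch of the same unauthenticated poll (an assumption for illustration, not api_server.go itself):

// healthz_poll.go: poll the apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe is anonymous (hence the 403s above), so TLS verification is skipped in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.11:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // prints "ok" once all post-start hooks report healthy
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}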
	I1010 19:24:34.067208  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:36.067413  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.566729  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.754645  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:38.772086  147213 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:38.792017  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:38.800547  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:38.800592  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:38.800602  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:38.800609  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:38.800617  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:38.800624  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:24:38.800629  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:38.800638  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:38.800642  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:24:38.800648  147213 system_pods.go:74] duration metric: took 8.60732ms to wait for pod list to return data ...
	I1010 19:24:38.800654  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:38.804628  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:38.804663  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:38.804680  147213 node_conditions.go:105] duration metric: took 4.021699ms to run NodePressure ...
	I1010 19:24:38.804700  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:39.078452  147213 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087090  147213 kubeadm.go:739] kubelet initialised
	I1010 19:24:39.087116  147213 kubeadm.go:740] duration metric: took 8.636436ms waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087125  147213 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:39.094468  147213 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.108724  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108756  147213 pod_ready.go:82] duration metric: took 14.254631ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.108770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108780  147213 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.119304  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119335  147213 pod_ready.go:82] duration metric: took 10.543376ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.119345  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119352  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.127243  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127268  147213 pod_ready.go:82] duration metric: took 7.907414ms for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.127278  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127285  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.195549  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195578  147213 pod_ready.go:82] duration metric: took 68.282333ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.195588  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195594  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.595842  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595871  147213 pod_ready.go:82] duration metric: took 400.267905ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.595880  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595886  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.995731  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995760  147213 pod_ready.go:82] duration metric: took 399.866947ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.995770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995777  147213 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:40.396420  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396456  147213 pod_ready.go:82] duration metric: took 400.667834ms for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:40.396470  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396482  147213 pod_ready.go:39] duration metric: took 1.309346973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:40.396508  147213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:24:40.409956  147213 ops.go:34] apiserver oom_adj: -16
	I1010 19:24:40.409980  147213 kubeadm.go:597] duration metric: took 9.364998977s to restartPrimaryControlPlane
	I1010 19:24:40.409991  147213 kubeadm.go:394] duration metric: took 9.419470024s to StartCluster
	I1010 19:24:40.410009  147213 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.410085  147213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:24:40.413037  147213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.413448  147213 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:24:40.413783  147213 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:24:40.413979  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:40.413996  147213 addons.go:69] Setting default-storageclass=true in profile "no-preload-320324"
	I1010 19:24:40.414020  147213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320324"
	I1010 19:24:40.413983  147213 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320324"
	I1010 19:24:40.414048  147213 addons.go:234] Setting addon storage-provisioner=true in "no-preload-320324"
	W1010 19:24:40.414057  147213 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:24:40.414091  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414170  147213 addons.go:69] Setting metrics-server=true in profile "no-preload-320324"
	I1010 19:24:40.414230  147213 addons.go:234] Setting addon metrics-server=true in "no-preload-320324"
	W1010 19:24:40.414252  147213 addons.go:243] addon metrics-server should already be in state true
	I1010 19:24:40.414292  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414612  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414640  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414678  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.414712  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.415409  147213 out.go:177] * Verifying Kubernetes components...
	I1010 19:24:40.415412  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.415553  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.416812  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:40.431363  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1010 19:24:40.431474  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I1010 19:24:40.431659  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I1010 19:24:40.431983  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432136  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432156  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432567  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432587  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432710  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432732  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432740  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432749  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.433000  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433079  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433103  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433468  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.433498  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.436984  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.453362  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.453426  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.454884  147213 addons.go:234] Setting addon default-storageclass=true in "no-preload-320324"
	W1010 19:24:40.454913  147213 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:24:40.454947  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.455335  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.455394  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.470642  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1010 19:24:40.471118  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.471701  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.471730  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.472241  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.472523  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.473953  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I1010 19:24:40.474196  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I1010 19:24:40.474332  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474672  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474814  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.474827  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475181  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.475210  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475310  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475702  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475785  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.475825  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.475922  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.476046  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.478147  147213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:40.478395  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.479869  147213 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.479896  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:24:40.479922  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.480549  147213 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:24:37.051611  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:39.551952  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:41.553895  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:40.482101  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:24:40.482119  147213 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:24:40.482144  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.484066  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484560  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.484588  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484833  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.485065  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.485241  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.485272  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485443  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.485788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.485807  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485842  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.486017  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.486202  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.486454  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.492533  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1010 19:24:40.493012  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.493566  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.493595  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.494056  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.494325  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.496053  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.496301  147213 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.496321  147213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:24:40.496344  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.499125  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499667  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.499690  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499843  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.500022  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.500194  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.500357  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
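	(Aside: the DHCP-lease lookup and SSH client creation logged above can be reproduced by hand against the same libvirt network. A rough sketch, assuming virsh access on the Jenkins host; the network name, MAC, IP, key path and username are taken from the log lines above, everything else is illustrative:
	    virsh net-dhcp-leases mk-no-preload-320324        # should list 52:54:00:95:03:cd -> 192.168.72.11
	    ssh -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa \
	        docker@192.168.72.11                          # same key and user the ssh_runner uses
	)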
	I1010 19:24:40.651454  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:40.667056  147213 node_ready.go:35] waiting up to 6m0s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:40.782217  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.803094  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:24:40.803122  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:24:40.812288  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.837679  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:24:40.837723  147213 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:24:40.882090  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:40.882119  147213 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:24:40.940115  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:41.949181  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.136852217s)
	I1010 19:24:41.949258  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949275  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949286  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.167030419s)
	I1010 19:24:41.949327  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949345  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949625  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949652  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949660  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949661  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949668  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949679  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949761  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949804  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949819  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949826  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.950811  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950824  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.950827  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950822  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950845  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950811  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.957797  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.957814  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.958071  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.958077  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.958099  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005530  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065377363s)
	I1010 19:24:42.005590  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.005602  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.005914  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.005937  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005935  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.005972  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.006003  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.006280  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.006313  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.006335  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.006354  147213 addons.go:475] Verifying addon metrics-server=true in "no-preload-320324"
	I1010 19:24:42.008523  147213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
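	(Aside: the addon-enable sequence above amounts to scp-ing the manifests onto the node and applying them with the bundled kubectl against the node-local kubeconfig. A minimal sketch of the equivalent commands, run on the guest over SSH; paths and binary version are copied from the log, nothing else is assumed:
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.31.1/kubectl apply \
	        -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	        -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	        -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	        -f /etc/kubernetes/addons/metrics-server-service.yaml
	)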
	I1010 19:24:39.028363  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:39.528571  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.027992  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.528552  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.028220  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.527889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:42.028462  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:42.028549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:42.070669  148123 cri.go:89] found id: ""
	I1010 19:24:42.070701  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.070710  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:42.070716  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:42.070775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:42.110693  148123 cri.go:89] found id: ""
	I1010 19:24:42.110731  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.110748  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:42.110756  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:42.110816  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:42.148471  148123 cri.go:89] found id: ""
	I1010 19:24:42.148511  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.148525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:42.148535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:42.148603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:42.184637  148123 cri.go:89] found id: ""
	I1010 19:24:42.184670  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.184683  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:42.184691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:42.184759  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:42.226794  148123 cri.go:89] found id: ""
	I1010 19:24:42.226834  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.226848  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:42.226857  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:42.226942  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:42.262969  148123 cri.go:89] found id: ""
	I1010 19:24:42.263004  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.263017  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:42.263027  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:42.263167  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:42.301063  148123 cri.go:89] found id: ""
	I1010 19:24:42.301088  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.301096  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:42.301102  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:42.301153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:42.338829  148123 cri.go:89] found id: ""
	I1010 19:24:42.338862  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.338873  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:42.338886  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:42.338901  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:42.391426  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:42.391478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:42.405520  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:42.405563  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:42.544245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:42.544275  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:42.544292  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:42.620760  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:42.620804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
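	(Aside: the diagnostic cycle above repeats because crictl finds no control-plane containers on the old-k8s-version node, so the harness falls back to gathering kubelet, dmesg and CRI-O logs; the "describe nodes" step then fails with "connection to the server localhost:8443 was refused" simply because no kube-apiserver is running yet. The commands it runs on the node, as logged:
	    sudo crictl ps -a --quiet --name=kube-apiserver   # empty: apiserver container not created yet
	    sudo journalctl -u kubelet -n 400                 # kubelet logs
	    sudo journalctl -u crio -n 400                    # CRI-O logs
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig       # fails while localhost:8443 is refused
	)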
	I1010 19:24:42.009965  147213 addons.go:510] duration metric: took 1.596190602s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1010 19:24:42.672792  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:41.066744  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.066850  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.557231  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:46.051820  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:45.164389  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:45.179714  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:45.179776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:45.216271  148123 cri.go:89] found id: ""
	I1010 19:24:45.216308  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.216319  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:45.216327  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:45.216394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:45.255109  148123 cri.go:89] found id: ""
	I1010 19:24:45.255154  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.255167  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:45.255176  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:45.255248  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:45.294338  148123 cri.go:89] found id: ""
	I1010 19:24:45.294369  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.294380  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:45.294388  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:45.294457  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:45.339573  148123 cri.go:89] found id: ""
	I1010 19:24:45.339606  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.339626  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:45.339636  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:45.339703  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:45.400184  148123 cri.go:89] found id: ""
	I1010 19:24:45.400214  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.400225  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:45.400234  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:45.400301  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:45.436152  148123 cri.go:89] found id: ""
	I1010 19:24:45.436183  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.436195  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:45.436203  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:45.436262  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.484321  148123 cri.go:89] found id: ""
	I1010 19:24:45.484347  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.484355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:45.484361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:45.484441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:45.532893  148123 cri.go:89] found id: ""
	I1010 19:24:45.532923  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.532932  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:45.532943  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:45.532958  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:45.585183  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:45.585214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:45.638800  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:45.638847  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:45.653928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:45.653961  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:45.733534  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:45.733564  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:45.733580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.314367  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:48.329687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:48.329754  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:48.370372  148123 cri.go:89] found id: ""
	I1010 19:24:48.370400  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.370409  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:48.370415  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:48.370490  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:48.409362  148123 cri.go:89] found id: ""
	I1010 19:24:48.409396  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.409407  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:48.409415  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:48.409500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:48.449640  148123 cri.go:89] found id: ""
	I1010 19:24:48.449672  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.449681  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:48.449687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:48.449746  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:48.485053  148123 cri.go:89] found id: ""
	I1010 19:24:48.485104  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.485121  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:48.485129  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:48.485183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:48.520083  148123 cri.go:89] found id: ""
	I1010 19:24:48.520113  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.520121  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:48.520127  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:48.520185  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:48.555112  148123 cri.go:89] found id: ""
	I1010 19:24:48.555138  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.555149  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:48.555156  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:48.555241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.171882  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:47.673073  147213 node_ready.go:49] node "no-preload-320324" has status "Ready":"True"
	I1010 19:24:47.673103  147213 node_ready.go:38] duration metric: took 7.00601327s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:47.673117  147213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:47.682195  147213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690079  147213 pod_ready.go:93] pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.690111  147213 pod_ready.go:82] duration metric: took 7.882823ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690126  147213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698009  147213 pod_ready.go:93] pod "etcd-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.698038  147213 pod_ready.go:82] duration metric: took 7.903016ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698052  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:45.066893  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:47.566144  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.551853  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.050365  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.592682  148123 cri.go:89] found id: ""
	I1010 19:24:48.592710  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.592719  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:48.592725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:48.592775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:48.628449  148123 cri.go:89] found id: ""
	I1010 19:24:48.628482  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.628490  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:48.628500  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:48.628513  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.709385  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:48.709428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:48.752542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:48.752677  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:48.812331  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:48.812385  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:48.827057  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:48.827095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:48.908312  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.409122  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:51.423404  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:51.423482  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:51.465742  148123 cri.go:89] found id: ""
	I1010 19:24:51.465781  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.465793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:51.465803  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:51.465875  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:51.501597  148123 cri.go:89] found id: ""
	I1010 19:24:51.501630  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.501641  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:51.501648  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:51.501711  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:51.537500  148123 cri.go:89] found id: ""
	I1010 19:24:51.537539  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.537551  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:51.537559  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:51.537626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:51.579546  148123 cri.go:89] found id: ""
	I1010 19:24:51.579576  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.579587  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:51.579595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:51.579660  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:51.619893  148123 cri.go:89] found id: ""
	I1010 19:24:51.619917  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.619925  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:51.619931  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:51.619999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:51.655787  148123 cri.go:89] found id: ""
	I1010 19:24:51.655824  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.655837  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:51.655846  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:51.655921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:51.699582  148123 cri.go:89] found id: ""
	I1010 19:24:51.699611  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.699619  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:51.699625  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:51.699685  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:51.738658  148123 cri.go:89] found id: ""
	I1010 19:24:51.738689  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.738697  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:51.738707  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:51.738721  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:51.789556  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:51.789587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:51.858919  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:51.858968  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:51.887818  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:51.887854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:51.964408  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.964434  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:51.964449  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:49.705130  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.705847  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.205374  147213 pod_ready.go:93] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.205401  147213 pod_ready.go:82] duration metric: took 5.507341974s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.205413  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210237  147213 pod_ready.go:93] pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.210259  147213 pod_ready.go:82] duration metric: took 4.83925ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210269  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215158  147213 pod_ready.go:93] pod "kube-proxy-vn6sv" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.215186  147213 pod_ready.go:82] duration metric: took 4.909888ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215198  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220077  147213 pod_ready.go:93] pod "kube-scheduler-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.220097  147213 pod_ready.go:82] duration metric: took 4.890652ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220105  147213 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
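	(Aside: from here the pod_ready poll keeps reporting "Ready":"False" for the metrics-server pod, which is what eventually times out the metrics-server related tests. A hypothetical manual check of the same condition, not a command the harness runs and assuming the usual minikube context name and the k8s-app=metrics-server label:
	    kubectl --context no-preload-320324 -n kube-system get pod -l k8s-app=metrics-server \
	      -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'
	)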
	I1010 19:24:50.066165  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:52.066343  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.552604  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:56.050748  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.545627  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:54.560748  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:54.560830  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:54.595863  148123 cri.go:89] found id: ""
	I1010 19:24:54.595893  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.595903  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:54.595912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:54.595978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:54.633645  148123 cri.go:89] found id: ""
	I1010 19:24:54.633681  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.633693  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:54.633701  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:54.633761  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:54.668269  148123 cri.go:89] found id: ""
	I1010 19:24:54.668299  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.668311  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:54.668317  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:54.668369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:54.706559  148123 cri.go:89] found id: ""
	I1010 19:24:54.706591  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.706600  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:54.706608  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:54.706673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:54.746248  148123 cri.go:89] found id: ""
	I1010 19:24:54.746283  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.746295  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:54.746303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:54.746383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:54.782982  148123 cri.go:89] found id: ""
	I1010 19:24:54.783017  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.783027  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:54.783033  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:54.783085  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:54.819664  148123 cri.go:89] found id: ""
	I1010 19:24:54.819700  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.819713  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:54.819722  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:54.819797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:54.859603  148123 cri.go:89] found id: ""
	I1010 19:24:54.859632  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.859640  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:54.859650  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:54.859662  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:54.910949  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:54.910987  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:54.925941  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:54.925975  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:55.009626  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:55.009654  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:55.009669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.097196  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:55.097237  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:57.641732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:57.661141  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:57.661222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:57.716054  148123 cri.go:89] found id: ""
	I1010 19:24:57.716086  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.716094  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:57.716100  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:57.716178  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:57.766862  148123 cri.go:89] found id: ""
	I1010 19:24:57.766892  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.766906  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:57.766917  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:57.766989  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:57.814776  148123 cri.go:89] found id: ""
	I1010 19:24:57.814808  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.814821  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:57.814829  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:57.814899  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:57.850458  148123 cri.go:89] found id: ""
	I1010 19:24:57.850495  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.850505  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:57.850516  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:57.850667  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:57.886541  148123 cri.go:89] found id: ""
	I1010 19:24:57.886566  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.886575  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:57.886581  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:57.886645  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:57.920748  148123 cri.go:89] found id: ""
	I1010 19:24:57.920783  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.920795  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:57.920802  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:57.920887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:57.957801  148123 cri.go:89] found id: ""
	I1010 19:24:57.957833  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.957844  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:57.957852  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:57.957919  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:57.995648  148123 cri.go:89] found id: ""
	I1010 19:24:57.995683  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.995694  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:57.995706  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:57.995723  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:58.034987  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:58.035030  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:58.089014  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:58.089066  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:58.104179  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:58.104211  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:58.178239  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:58.178271  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:58.178291  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.229459  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.727298  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.566779  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.065902  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:58.051248  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.550512  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.756057  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:00.770086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:00.770150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:00.811400  148123 cri.go:89] found id: ""
	I1010 19:25:00.811444  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.811456  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:00.811466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:00.811533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:00.848821  148123 cri.go:89] found id: ""
	I1010 19:25:00.848859  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.848870  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:00.848877  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:00.848947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:00.883529  148123 cri.go:89] found id: ""
	I1010 19:25:00.883557  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.883566  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:00.883573  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:00.883631  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:00.922952  148123 cri.go:89] found id: ""
	I1010 19:25:00.922982  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.922992  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:00.922998  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:00.923057  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:00.956116  148123 cri.go:89] found id: ""
	I1010 19:25:00.956147  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.956159  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:00.956168  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:00.956233  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:00.998892  148123 cri.go:89] found id: ""
	I1010 19:25:00.998919  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.998930  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:00.998939  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:00.998996  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:01.037617  148123 cri.go:89] found id: ""
	I1010 19:25:01.037649  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.037657  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:01.037663  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:01.037717  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:01.078003  148123 cri.go:89] found id: ""
	I1010 19:25:01.078034  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.078046  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:01.078055  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:01.078069  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:01.118948  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:01.118977  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:01.171053  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:01.171091  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:01.187027  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:01.187060  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:01.261288  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:01.261315  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:01.261330  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:59.728997  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.227142  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:59.566448  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.066184  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.551951  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:05.050558  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:03.848144  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:03.862527  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:03.862601  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:03.899120  148123 cri.go:89] found id: ""
	I1010 19:25:03.899159  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.899180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:03.899187  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:03.899276  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:03.935461  148123 cri.go:89] found id: ""
	I1010 19:25:03.935492  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.935500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:03.935507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:03.935569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:03.971892  148123 cri.go:89] found id: ""
	I1010 19:25:03.971925  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.971937  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:03.971945  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:03.972019  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:04.008341  148123 cri.go:89] found id: ""
	I1010 19:25:04.008368  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.008377  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:04.008390  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:04.008447  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:04.042007  148123 cri.go:89] found id: ""
	I1010 19:25:04.042036  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.042044  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:04.042051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:04.042101  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:04.078017  148123 cri.go:89] found id: ""
	I1010 19:25:04.078045  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.078053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:04.078059  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:04.078112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:04.116792  148123 cri.go:89] found id: ""
	I1010 19:25:04.116823  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.116832  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:04.116839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:04.116928  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:04.153468  148123 cri.go:89] found id: ""
	I1010 19:25:04.153496  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.153503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:04.153513  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:04.153525  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:04.230646  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:04.230683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:04.270975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:04.271015  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.320845  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:04.320902  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:04.337789  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:04.337828  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:04.416077  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:06.916412  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:06.931309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:06.931376  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:06.969224  148123 cri.go:89] found id: ""
	I1010 19:25:06.969257  148123 logs.go:282] 0 containers: []
	W1010 19:25:06.969269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:06.969277  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:06.969346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:07.006598  148123 cri.go:89] found id: ""
	I1010 19:25:07.006653  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.006667  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:07.006676  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:07.006744  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:07.046392  148123 cri.go:89] found id: ""
	I1010 19:25:07.046418  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.046427  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:07.046433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:07.046491  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:07.089570  148123 cri.go:89] found id: ""
	I1010 19:25:07.089603  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.089615  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:07.089624  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:07.089689  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:07.127152  148123 cri.go:89] found id: ""
	I1010 19:25:07.127185  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.127195  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:07.127204  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:07.127282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:07.160516  148123 cri.go:89] found id: ""
	I1010 19:25:07.160543  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.160554  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:07.160563  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:07.160635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:07.197270  148123 cri.go:89] found id: ""
	I1010 19:25:07.197307  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.197318  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:07.197327  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:07.197395  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:07.232658  148123 cri.go:89] found id: ""
	I1010 19:25:07.232687  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.232696  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:07.232706  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:07.232720  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:07.246579  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:07.246609  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:07.315200  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:07.315230  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:07.315251  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:07.389532  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:07.389580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:07.438808  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:07.438837  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.227537  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.727865  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:04.067121  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.565089  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:08.565565  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:07.051371  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.051420  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.054211  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.990294  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:10.004155  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:10.004270  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:10.039034  148123 cri.go:89] found id: ""
	I1010 19:25:10.039068  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.039078  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:10.039087  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:10.039174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:10.079936  148123 cri.go:89] found id: ""
	I1010 19:25:10.079967  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.079978  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:10.079985  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:10.080068  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:10.116441  148123 cri.go:89] found id: ""
	I1010 19:25:10.116471  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.116483  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:10.116491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:10.116556  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:10.151998  148123 cri.go:89] found id: ""
	I1010 19:25:10.152045  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.152058  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:10.152075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:10.152153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:10.188514  148123 cri.go:89] found id: ""
	I1010 19:25:10.188547  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.188565  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:10.188574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:10.188640  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:10.223648  148123 cri.go:89] found id: ""
	I1010 19:25:10.223682  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.223694  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:10.223702  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:10.223771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:10.264117  148123 cri.go:89] found id: ""
	I1010 19:25:10.264143  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.264151  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:10.264158  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:10.264215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:10.302566  148123 cri.go:89] found id: ""
	I1010 19:25:10.302601  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.302613  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:10.302625  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:10.302649  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:10.316567  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:10.316606  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:10.388642  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:10.388674  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:10.388692  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:10.471035  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:10.471090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:10.519600  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:10.519645  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:13.075428  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:13.090830  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:13.090915  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:13.126302  148123 cri.go:89] found id: ""
	I1010 19:25:13.126336  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.126348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:13.126357  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:13.126429  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:13.163309  148123 cri.go:89] found id: ""
	I1010 19:25:13.163344  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.163357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:13.163365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:13.163428  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:13.197569  148123 cri.go:89] found id: ""
	I1010 19:25:13.197605  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.197615  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:13.197621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:13.197692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:13.236299  148123 cri.go:89] found id: ""
	I1010 19:25:13.236328  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.236336  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:13.236342  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:13.236406  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:13.275667  148123 cri.go:89] found id: ""
	I1010 19:25:13.275696  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.275705  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:13.275711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:13.275776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:13.309728  148123 cri.go:89] found id: ""
	I1010 19:25:13.309763  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.309774  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:13.309786  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:13.309854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:13.347464  148123 cri.go:89] found id: ""
	I1010 19:25:13.347493  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.347504  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:13.347513  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:13.347576  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:13.385086  148123 cri.go:89] found id: ""
	I1010 19:25:13.385119  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.385130  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:13.385142  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:13.385156  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:13.466513  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:13.466553  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:13.508610  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:13.508651  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:09.226850  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.227241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.726879  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:10.565663  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:12.565845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.555465  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:16.051764  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.564897  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:13.564932  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:13.578771  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:13.578803  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:13.651857  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.153066  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:16.167613  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:16.167698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:16.203867  148123 cri.go:89] found id: ""
	I1010 19:25:16.203897  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.203906  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:16.203912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:16.203965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:16.242077  148123 cri.go:89] found id: ""
	I1010 19:25:16.242109  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.242130  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:16.242138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:16.242234  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:16.283792  148123 cri.go:89] found id: ""
	I1010 19:25:16.283825  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.283836  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:16.283844  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:16.283907  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:16.319934  148123 cri.go:89] found id: ""
	I1010 19:25:16.319969  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.319980  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:16.319990  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:16.320063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:16.360439  148123 cri.go:89] found id: ""
	I1010 19:25:16.360482  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.360495  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:16.360504  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:16.360569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:16.395890  148123 cri.go:89] found id: ""
	I1010 19:25:16.395922  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.395931  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:16.395941  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:16.396009  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:16.431396  148123 cri.go:89] found id: ""
	I1010 19:25:16.431442  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.431456  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:16.431463  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:16.431531  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:16.471768  148123 cri.go:89] found id: ""
	I1010 19:25:16.471796  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.471804  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:16.471814  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:16.471830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:16.526112  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:16.526155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:16.541041  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:16.541074  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:16.623245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.623273  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:16.623289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:16.710840  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:16.710884  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:15.727171  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.728705  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:15.067362  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.566242  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:18.551207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:21.050222  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:19.257566  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:19.273982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:19.274063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:19.311360  148123 cri.go:89] found id: ""
	I1010 19:25:19.311392  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.311404  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:19.311411  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:19.311475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:19.347025  148123 cri.go:89] found id: ""
	I1010 19:25:19.347053  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.347062  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:19.347068  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:19.347120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:19.383575  148123 cri.go:89] found id: ""
	I1010 19:25:19.383608  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.383620  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:19.383633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:19.383698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:19.419245  148123 cri.go:89] found id: ""
	I1010 19:25:19.419270  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.419278  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:19.419284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:19.419334  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:19.453563  148123 cri.go:89] found id: ""
	I1010 19:25:19.453591  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.453601  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:19.453658  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:19.453715  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:19.496430  148123 cri.go:89] found id: ""
	I1010 19:25:19.496458  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.496466  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:19.496473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:19.496533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:19.534626  148123 cri.go:89] found id: ""
	I1010 19:25:19.534670  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.534682  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:19.534691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:19.534757  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:19.574186  148123 cri.go:89] found id: ""
	I1010 19:25:19.574236  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.574248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:19.574261  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:19.574283  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:19.587952  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:19.587989  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:19.664607  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:19.664644  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:19.664664  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:19.742185  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:19.742224  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:19.781530  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:19.781562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.335398  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:22.349627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:22.349693  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:22.389075  148123 cri.go:89] found id: ""
	I1010 19:25:22.389102  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.389110  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:22.389117  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:22.389168  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:22.424331  148123 cri.go:89] found id: ""
	I1010 19:25:22.424368  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.424381  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:22.424389  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:22.424460  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:22.464011  148123 cri.go:89] found id: ""
	I1010 19:25:22.464051  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.464062  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:22.464070  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:22.464141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:22.500786  148123 cri.go:89] found id: ""
	I1010 19:25:22.500819  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.500828  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:22.500835  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:22.500914  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:22.538016  148123 cri.go:89] found id: ""
	I1010 19:25:22.538053  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.538061  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:22.538068  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:22.538124  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:22.576600  148123 cri.go:89] found id: ""
	I1010 19:25:22.576629  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.576637  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:22.576643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:22.576702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:22.616020  148123 cri.go:89] found id: ""
	I1010 19:25:22.616048  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.616057  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:22.616064  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:22.616114  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:22.652238  148123 cri.go:89] found id: ""
	I1010 19:25:22.652271  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.652283  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:22.652295  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:22.652311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:22.726182  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:22.726216  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:22.726234  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:22.814040  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:22.814086  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:22.859254  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:22.859289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.916565  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:22.916607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:20.227871  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.732566  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:20.066872  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.566173  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:23.050833  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.551662  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.432636  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:25.446982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:25.447047  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.482440  148123 cri.go:89] found id: ""
	I1010 19:25:25.482472  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.482484  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:25.482492  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:25.482552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:25.517709  148123 cri.go:89] found id: ""
	I1010 19:25:25.517742  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.517756  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:25.517764  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:25.517867  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:25.561504  148123 cri.go:89] found id: ""
	I1010 19:25:25.561532  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.561544  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:25.561552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:25.561616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:25.601704  148123 cri.go:89] found id: ""
	I1010 19:25:25.601741  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.601753  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:25.601762  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:25.601825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:25.638519  148123 cri.go:89] found id: ""
	I1010 19:25:25.638544  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.638552  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:25.638558  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:25.638609  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:25.673390  148123 cri.go:89] found id: ""
	I1010 19:25:25.673425  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.673436  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:25.673447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:25.673525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:25.708251  148123 cri.go:89] found id: ""
	I1010 19:25:25.708278  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.708286  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:25.708293  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:25.708354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:25.749793  148123 cri.go:89] found id: ""
	I1010 19:25:25.749826  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.749837  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:25.749846  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:25.749861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:25.802511  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:25.802554  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:25.817523  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:25.817562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:25.898595  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:25.898619  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:25.898635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:25.978629  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:25.978684  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:28.520165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:28.532725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:28.532793  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.226875  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.729015  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.066298  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.066963  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.551915  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.558497  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:28.571667  148123 cri.go:89] found id: ""
	I1010 19:25:28.571695  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.571704  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:28.571710  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:28.571770  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:28.608136  148123 cri.go:89] found id: ""
	I1010 19:25:28.608165  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.608174  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:28.608181  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:28.608244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:28.642123  148123 cri.go:89] found id: ""
	I1010 19:25:28.642161  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.642173  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:28.642181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:28.642242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:28.678600  148123 cri.go:89] found id: ""
	I1010 19:25:28.678633  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.678643  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:28.678651  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:28.678702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:28.716627  148123 cri.go:89] found id: ""
	I1010 19:25:28.716660  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.716673  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:28.716681  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:28.716766  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:28.752750  148123 cri.go:89] found id: ""
	I1010 19:25:28.752786  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.752798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:28.752806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:28.752898  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:28.790021  148123 cri.go:89] found id: ""
	I1010 19:25:28.790054  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.790066  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:28.790075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:28.790128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:28.828760  148123 cri.go:89] found id: ""
	I1010 19:25:28.828803  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.828827  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:28.828839  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:28.828877  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:28.879773  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:28.879811  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:28.894311  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:28.894342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:28.968675  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:28.968706  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:28.968729  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:29.045862  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:29.045913  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:31.588772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:31.601453  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:31.601526  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:31.637056  148123 cri.go:89] found id: ""
	I1010 19:25:31.637090  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.637101  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:31.637108  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:31.637183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:31.677825  148123 cri.go:89] found id: ""
	I1010 19:25:31.677854  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.677862  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:31.677867  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:31.677920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:31.715372  148123 cri.go:89] found id: ""
	I1010 19:25:31.715402  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.715411  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:31.715419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:31.715470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:31.753767  148123 cri.go:89] found id: ""
	I1010 19:25:31.753794  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.753806  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:31.753814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:31.753879  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:31.792999  148123 cri.go:89] found id: ""
	I1010 19:25:31.793024  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.793035  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:31.793050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:31.793106  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:31.831571  148123 cri.go:89] found id: ""
	I1010 19:25:31.831598  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.831607  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:31.831614  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:31.831673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:31.880382  148123 cri.go:89] found id: ""
	I1010 19:25:31.880422  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.880436  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:31.880447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:31.880527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:31.919596  148123 cri.go:89] found id: ""
	I1010 19:25:31.919627  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.919637  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:31.919647  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:31.919661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:31.967963  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:31.968001  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:31.983064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:31.983106  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:32.056073  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:32.056105  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:32.056122  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:32.142927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:32.142978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:30.226683  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.227047  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.565699  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:31.566109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.051411  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.052064  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.550062  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.690025  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:34.705326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:34.705413  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:34.740756  148123 cri.go:89] found id: ""
	I1010 19:25:34.740786  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.740793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:34.740799  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:34.740872  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:34.777021  148123 cri.go:89] found id: ""
	I1010 19:25:34.777049  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.777061  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:34.777069  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:34.777123  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:34.826699  148123 cri.go:89] found id: ""
	I1010 19:25:34.826735  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.826747  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:34.826754  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:34.826896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:34.864297  148123 cri.go:89] found id: ""
	I1010 19:25:34.864329  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.864340  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:34.864348  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:34.864415  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:34.899198  148123 cri.go:89] found id: ""
	I1010 19:25:34.899234  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.899247  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:34.899256  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:34.899315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:34.934132  148123 cri.go:89] found id: ""
	I1010 19:25:34.934162  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.934170  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:34.934176  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:34.934238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:34.969858  148123 cri.go:89] found id: ""
	I1010 19:25:34.969891  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.969903  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:34.969911  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:34.969978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:35.006082  148123 cri.go:89] found id: ""
	I1010 19:25:35.006121  148123 logs.go:282] 0 containers: []
	W1010 19:25:35.006132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:35.006145  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:35.006160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:35.060596  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:35.060631  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:35.076246  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:35.076281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:35.151879  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:35.151912  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:35.151931  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:35.231802  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:35.231845  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:37.774550  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:37.787871  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:37.787938  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:37.824273  148123 cri.go:89] found id: ""
	I1010 19:25:37.824306  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.824317  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:37.824325  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:37.824410  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:37.860548  148123 cri.go:89] found id: ""
	I1010 19:25:37.860582  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.860593  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:37.860601  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:37.860677  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:37.894616  148123 cri.go:89] found id: ""
	I1010 19:25:37.894644  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.894654  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:37.894662  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:37.894730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:37.931247  148123 cri.go:89] found id: ""
	I1010 19:25:37.931272  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.931280  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:37.931287  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:37.931337  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:37.968028  148123 cri.go:89] found id: ""
	I1010 19:25:37.968068  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.968079  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:37.968086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:37.968153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:38.004661  148123 cri.go:89] found id: ""
	I1010 19:25:38.004692  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.004700  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:38.004706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:38.004760  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:38.040874  148123 cri.go:89] found id: ""
	I1010 19:25:38.040906  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.040915  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:38.040922  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:38.040990  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:38.083771  148123 cri.go:89] found id: ""
	I1010 19:25:38.083802  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.083811  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:38.083821  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:38.083836  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:38.099645  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:38.099683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:38.172168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:38.172207  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:38.172223  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:38.248523  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:38.248561  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:38.306594  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:38.306630  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:34.728106  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:37.226285  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.065919  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.066751  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.067361  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.550359  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.551190  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.873868  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:40.889561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:40.889668  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:40.926769  148123 cri.go:89] found id: ""
	I1010 19:25:40.926800  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.926808  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:40.926816  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:40.926869  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:40.962564  148123 cri.go:89] found id: ""
	I1010 19:25:40.962592  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.962600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:40.962606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:40.962659  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:40.997856  148123 cri.go:89] found id: ""
	I1010 19:25:40.997885  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.997894  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:40.997900  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:40.997951  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:41.035479  148123 cri.go:89] found id: ""
	I1010 19:25:41.035506  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.035514  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:41.035520  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:41.035575  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:41.077733  148123 cri.go:89] found id: ""
	I1010 19:25:41.077817  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.077830  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:41.077840  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:41.077916  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:41.115439  148123 cri.go:89] found id: ""
	I1010 19:25:41.115476  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.115489  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:41.115498  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:41.115552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:41.152586  148123 cri.go:89] found id: ""
	I1010 19:25:41.152625  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.152636  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:41.152643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:41.152695  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:41.186807  148123 cri.go:89] found id: ""
	I1010 19:25:41.186840  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.186851  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:41.186862  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:41.186880  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:41.200404  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:41.200447  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:41.280717  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:41.280744  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:41.280760  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:41.360561  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:41.360611  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:41.404461  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:41.404497  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:39.226903  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:41.227077  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.727197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.570404  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.066523  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.050813  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.051094  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.955723  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:43.970895  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:43.970964  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:44.012162  148123 cri.go:89] found id: ""
	I1010 19:25:44.012194  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.012203  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:44.012209  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:44.012282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:44.074583  148123 cri.go:89] found id: ""
	I1010 19:25:44.074606  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.074614  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:44.074620  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:44.074675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:44.133562  148123 cri.go:89] found id: ""
	I1010 19:25:44.133588  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.133596  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:44.133602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:44.133658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:44.168832  148123 cri.go:89] found id: ""
	I1010 19:25:44.168883  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.168896  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:44.168908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:44.168967  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:44.204896  148123 cri.go:89] found id: ""
	I1010 19:25:44.204923  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.204934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:44.204943  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:44.205002  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:44.242410  148123 cri.go:89] found id: ""
	I1010 19:25:44.242437  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.242448  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:44.242456  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:44.242524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:44.278553  148123 cri.go:89] found id: ""
	I1010 19:25:44.278581  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.278589  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:44.278595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:44.278658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:44.316099  148123 cri.go:89] found id: ""
	I1010 19:25:44.316125  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.316132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:44.316141  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:44.316155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:44.365809  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:44.365860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:44.380129  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:44.380162  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:44.454241  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:44.454267  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:44.454281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:44.530928  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:44.530974  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.078279  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:47.092233  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:47.092310  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:47.129517  148123 cri.go:89] found id: ""
	I1010 19:25:47.129557  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.129573  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:47.129582  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:47.129654  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:47.170309  148123 cri.go:89] found id: ""
	I1010 19:25:47.170345  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.170357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:47.170365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:47.170432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:47.205110  148123 cri.go:89] found id: ""
	I1010 19:25:47.205145  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.205156  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:47.205162  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:47.205228  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:47.245668  148123 cri.go:89] found id: ""
	I1010 19:25:47.245695  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.245704  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:47.245711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:47.245768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:47.284549  148123 cri.go:89] found id: ""
	I1010 19:25:47.284576  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.284584  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:47.284590  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:47.284643  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:47.324676  148123 cri.go:89] found id: ""
	I1010 19:25:47.324708  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.324720  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:47.324729  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:47.324782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:47.364315  148123 cri.go:89] found id: ""
	I1010 19:25:47.364347  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.364356  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:47.364362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:47.364433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:47.400611  148123 cri.go:89] found id: ""
	I1010 19:25:47.400641  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.400652  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:47.400664  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:47.400680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:47.415185  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:47.415227  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:47.481638  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:47.481665  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:47.481681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:47.560135  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:47.560171  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.602144  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:47.602174  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:46.227386  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:48.227699  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.066887  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.565340  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.051459  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:49.550170  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:51.554542  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.151770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:50.165336  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:50.165403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:50.201045  148123 cri.go:89] found id: ""
	I1010 19:25:50.201072  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.201082  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:50.201089  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:50.201154  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:50.237047  148123 cri.go:89] found id: ""
	I1010 19:25:50.237082  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.237094  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:50.237102  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:50.237174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:50.273727  148123 cri.go:89] found id: ""
	I1010 19:25:50.273756  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.273767  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:50.273780  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:50.273843  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:50.311397  148123 cri.go:89] found id: ""
	I1010 19:25:50.311424  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.311433  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:50.311450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:50.311513  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:50.347593  148123 cri.go:89] found id: ""
	I1010 19:25:50.347625  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.347637  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:50.347705  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:50.347782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:50.382195  148123 cri.go:89] found id: ""
	I1010 19:25:50.382228  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.382240  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:50.382247  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:50.382313  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:50.419189  148123 cri.go:89] found id: ""
	I1010 19:25:50.419221  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.419229  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:50.419236  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:50.419297  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:50.454368  148123 cri.go:89] found id: ""
	I1010 19:25:50.454399  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.454410  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:50.454421  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:50.454440  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:50.495575  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:50.495623  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.548691  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:50.548737  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:50.564585  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:50.564621  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:50.637152  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:50.637174  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:50.637190  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.221939  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:53.235889  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:53.235968  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:53.278589  148123 cri.go:89] found id: ""
	I1010 19:25:53.278620  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.278632  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:53.278640  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:53.278705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:53.315062  148123 cri.go:89] found id: ""
	I1010 19:25:53.315090  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.315101  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:53.315109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:53.315182  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:53.348994  148123 cri.go:89] found id: ""
	I1010 19:25:53.349031  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.349040  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:53.349046  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:53.349099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:53.383334  148123 cri.go:89] found id: ""
	I1010 19:25:53.383363  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.383373  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:53.383380  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:53.383450  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:53.417888  148123 cri.go:89] found id: ""
	I1010 19:25:53.417920  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.417931  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:53.417938  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:53.418013  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:53.454757  148123 cri.go:89] found id: ""
	I1010 19:25:53.454787  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.454798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:53.454806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:53.454871  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:53.491422  148123 cri.go:89] found id: ""
	I1010 19:25:53.491452  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.491464  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:53.491472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:53.491541  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:53.531240  148123 cri.go:89] found id: ""
	I1010 19:25:53.531271  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.531282  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:53.531294  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:53.531310  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.727196  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.226957  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.065907  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:52.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:54.051112  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:56.554137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.582576  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:53.582608  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:53.596481  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:53.596511  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:53.666134  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:53.666162  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:53.666178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.745621  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:53.745659  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.290744  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:56.307536  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:56.307610  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:56.353456  148123 cri.go:89] found id: ""
	I1010 19:25:56.353484  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.353494  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:56.353501  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:56.353553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:56.394093  148123 cri.go:89] found id: ""
	I1010 19:25:56.394120  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.394131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:56.394138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:56.394203  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:56.428388  148123 cri.go:89] found id: ""
	I1010 19:25:56.428428  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.428441  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:56.428450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:56.428524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:56.464414  148123 cri.go:89] found id: ""
	I1010 19:25:56.464459  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.464472  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:56.464480  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:56.464563  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:56.505271  148123 cri.go:89] found id: ""
	I1010 19:25:56.505307  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.505316  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:56.505322  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:56.505374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:56.546424  148123 cri.go:89] found id: ""
	I1010 19:25:56.546456  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.546467  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:56.546475  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:56.546545  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:56.587458  148123 cri.go:89] found id: ""
	I1010 19:25:56.587489  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.587501  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:56.587509  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:56.587588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:56.628248  148123 cri.go:89] found id: ""
	I1010 19:25:56.628279  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.628291  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:56.628304  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:56.628320  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:56.642109  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:56.642141  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:56.723646  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:56.723678  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:56.723696  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:56.805849  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:56.805899  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.849863  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:56.849891  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:55.230447  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.726896  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:55.066248  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.565240  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.051145  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:01.554276  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.407093  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:59.422915  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:59.423005  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:59.462867  148123 cri.go:89] found id: ""
	I1010 19:25:59.462898  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.462909  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:59.462917  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:59.462980  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:59.496931  148123 cri.go:89] found id: ""
	I1010 19:25:59.496958  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.496968  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:59.496983  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:59.497049  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:59.532241  148123 cri.go:89] found id: ""
	I1010 19:25:59.532271  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.532283  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:59.532291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:59.532352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:59.571285  148123 cri.go:89] found id: ""
	I1010 19:25:59.571313  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.571324  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:59.571331  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:59.571391  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:59.606687  148123 cri.go:89] found id: ""
	I1010 19:25:59.606721  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.606734  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:59.606741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:59.606800  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:59.643247  148123 cri.go:89] found id: ""
	I1010 19:25:59.643276  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.643286  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:59.643294  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:59.643369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:59.679305  148123 cri.go:89] found id: ""
	I1010 19:25:59.679335  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.679344  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:59.679350  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:59.679407  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:59.716149  148123 cri.go:89] found id: ""
	I1010 19:25:59.716198  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.716210  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:59.716222  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:59.716239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:59.730334  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:59.730364  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:59.799759  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.799786  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:59.799804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:59.881883  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:59.881925  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:59.923755  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:59.923784  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.478043  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:02.492627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:02.492705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:02.532133  148123 cri.go:89] found id: ""
	I1010 19:26:02.532160  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.532172  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:02.532179  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:02.532244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:02.571915  148123 cri.go:89] found id: ""
	I1010 19:26:02.571943  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.571951  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:02.571957  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:02.572014  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:02.616330  148123 cri.go:89] found id: ""
	I1010 19:26:02.616365  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.616376  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:02.616385  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:02.616455  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:02.662245  148123 cri.go:89] found id: ""
	I1010 19:26:02.662275  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.662284  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:02.662291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:02.662354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:02.698692  148123 cri.go:89] found id: ""
	I1010 19:26:02.698720  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.698731  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:02.698739  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:02.698805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:02.736643  148123 cri.go:89] found id: ""
	I1010 19:26:02.736667  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.736677  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:02.736685  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:02.736742  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:02.780356  148123 cri.go:89] found id: ""
	I1010 19:26:02.780388  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.780399  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:02.780407  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:02.780475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:02.816716  148123 cri.go:89] found id: ""
	I1010 19:26:02.816744  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.816752  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:02.816761  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:02.816775  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:02.899002  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:02.899046  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:02.942060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:02.942093  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.997605  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:02.997661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:03.011768  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:03.011804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:03.096404  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.727075  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.227526  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.565903  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.066179  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.049656  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.050425  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:05.596971  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:05.612524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:05.612598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:05.649999  148123 cri.go:89] found id: ""
	I1010 19:26:05.650032  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.650044  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:05.650052  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:05.650119  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:05.684365  148123 cri.go:89] found id: ""
	I1010 19:26:05.684398  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.684419  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:05.684427  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:05.684494  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:05.721801  148123 cri.go:89] found id: ""
	I1010 19:26:05.721832  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.721840  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:05.721847  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:05.721908  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:05.762010  148123 cri.go:89] found id: ""
	I1010 19:26:05.762037  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.762046  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:05.762056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:05.762117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:05.803425  148123 cri.go:89] found id: ""
	I1010 19:26:05.803457  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.803468  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:05.803476  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:05.803551  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:05.838800  148123 cri.go:89] found id: ""
	I1010 19:26:05.838833  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.838845  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:05.838853  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:05.838921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:05.875110  148123 cri.go:89] found id: ""
	I1010 19:26:05.875147  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.875158  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:05.875167  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:05.875245  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:05.908951  148123 cri.go:89] found id: ""
	I1010 19:26:05.908988  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.909000  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:05.909012  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:05.909027  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:05.922263  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:05.922293  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:05.996441  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:05.996465  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:05.996484  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:06.078113  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:06.078160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:06.122919  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:06.122956  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:04.726335  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.728178  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.066573  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.564991  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.566655  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.050522  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:10.550288  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.674464  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:08.688452  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:08.688570  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:08.726240  148123 cri.go:89] found id: ""
	I1010 19:26:08.726269  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.726279  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:08.726287  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:08.726370  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:08.764762  148123 cri.go:89] found id: ""
	I1010 19:26:08.764790  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.764799  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:08.764806  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:08.764887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:08.805522  148123 cri.go:89] found id: ""
	I1010 19:26:08.805552  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.805563  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:08.805572  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:08.805642  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:08.845694  148123 cri.go:89] found id: ""
	I1010 19:26:08.845729  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.845740  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:08.845747  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:08.845817  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:08.885024  148123 cri.go:89] found id: ""
	I1010 19:26:08.885054  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.885066  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:08.885074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:08.885138  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:08.927500  148123 cri.go:89] found id: ""
	I1010 19:26:08.927531  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.927540  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:08.927547  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:08.927616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:08.963870  148123 cri.go:89] found id: ""
	I1010 19:26:08.963904  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.963916  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:08.963924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:08.963988  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:08.997984  148123 cri.go:89] found id: ""
	I1010 19:26:08.998017  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.998027  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:08.998039  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:08.998056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:09.049307  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:09.049341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:09.063341  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:09.063432  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:09.145190  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:09.145214  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:09.145226  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:09.231409  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:09.231445  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:11.773475  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:11.788981  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:11.789055  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:11.830289  148123 cri.go:89] found id: ""
	I1010 19:26:11.830319  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.830327  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:11.830333  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:11.830399  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:11.867093  148123 cri.go:89] found id: ""
	I1010 19:26:11.867120  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.867131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:11.867138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:11.867212  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:11.902146  148123 cri.go:89] found id: ""
	I1010 19:26:11.902181  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.902192  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:11.902201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:11.902271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:11.938652  148123 cri.go:89] found id: ""
	I1010 19:26:11.938681  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.938691  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:11.938703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:11.938771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:11.975903  148123 cri.go:89] found id: ""
	I1010 19:26:11.975937  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.975947  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:11.975955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:11.976020  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:12.014083  148123 cri.go:89] found id: ""
	I1010 19:26:12.014111  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.014119  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:12.014126  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:12.014190  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:12.057832  148123 cri.go:89] found id: ""
	I1010 19:26:12.057858  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.057867  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:12.057874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:12.057925  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:12.103253  148123 cri.go:89] found id: ""
	I1010 19:26:12.103286  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.103298  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:12.103311  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:12.103326  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:12.171062  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:12.171104  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:12.185463  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:12.185496  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:12.261568  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:12.261592  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:12.261607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:12.345819  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:12.345860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:09.226954  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.227205  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.227457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.066777  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.565854  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:12.551323  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:15.051745  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:14.886283  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:14.901117  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:14.901213  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:14.935235  148123 cri.go:89] found id: ""
	I1010 19:26:14.935264  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.935273  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:14.935280  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:14.935336  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:14.970617  148123 cri.go:89] found id: ""
	I1010 19:26:14.970642  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.970652  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:14.970658  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:14.970722  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:15.011086  148123 cri.go:89] found id: ""
	I1010 19:26:15.011113  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.011122  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:15.011128  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:15.011188  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:15.053004  148123 cri.go:89] found id: ""
	I1010 19:26:15.053026  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.053033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:15.053039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:15.053099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:15.089275  148123 cri.go:89] found id: ""
	I1010 19:26:15.089304  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.089312  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:15.089319  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:15.089383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:15.126620  148123 cri.go:89] found id: ""
	I1010 19:26:15.126645  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.126654  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:15.126660  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:15.126723  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:15.160920  148123 cri.go:89] found id: ""
	I1010 19:26:15.160956  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.160969  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:15.160979  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:15.161037  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:15.197430  148123 cri.go:89] found id: ""
	I1010 19:26:15.197462  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.197474  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:15.197486  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:15.197507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:15.240905  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:15.240953  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:15.297663  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:15.297719  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:15.312279  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:15.312313  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:15.395296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:15.395320  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:15.395336  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:17.978030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:17.992574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:17.992651  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:18.028846  148123 cri.go:89] found id: ""
	I1010 19:26:18.028890  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.028901  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:18.028908  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:18.028965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:18.073004  148123 cri.go:89] found id: ""
	I1010 19:26:18.073033  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.073041  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:18.073047  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:18.073118  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:18.112062  148123 cri.go:89] found id: ""
	I1010 19:26:18.112098  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.112111  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:18.112121  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:18.112209  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:18.151565  148123 cri.go:89] found id: ""
	I1010 19:26:18.151597  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.151608  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:18.151616  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:18.151675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:18.197483  148123 cri.go:89] found id: ""
	I1010 19:26:18.197509  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.197518  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:18.197524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:18.197591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:18.235151  148123 cri.go:89] found id: ""
	I1010 19:26:18.235181  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.235193  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:18.235201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:18.235279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:18.273807  148123 cri.go:89] found id: ""
	I1010 19:26:18.273841  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.273852  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:18.273858  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:18.273920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:18.311644  148123 cri.go:89] found id: ""
	I1010 19:26:18.311677  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.311688  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:18.311700  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:18.311716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:18.355599  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:18.355635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:18.407786  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:18.407830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:18.422944  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:18.422976  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:18.496830  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:18.496873  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:18.496887  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:15.227600  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.726712  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:16.065701  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:18.066861  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.558257  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.050914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:21.108300  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:21.123177  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:21.123243  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:21.162544  148123 cri.go:89] found id: ""
	I1010 19:26:21.162575  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.162586  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:21.162595  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:21.162662  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:21.199799  148123 cri.go:89] found id: ""
	I1010 19:26:21.199828  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.199839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:21.199845  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:21.199901  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:21.246333  148123 cri.go:89] found id: ""
	I1010 19:26:21.246357  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.246366  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:21.246372  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:21.246433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:21.282681  148123 cri.go:89] found id: ""
	I1010 19:26:21.282719  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.282732  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:21.282740  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:21.282810  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:21.319500  148123 cri.go:89] found id: ""
	I1010 19:26:21.319535  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.319546  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:21.319554  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:21.319623  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:21.355779  148123 cri.go:89] found id: ""
	I1010 19:26:21.355809  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.355820  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:21.355828  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:21.355902  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:21.394567  148123 cri.go:89] found id: ""
	I1010 19:26:21.394608  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.394617  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:21.394623  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:21.394684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:21.430009  148123 cri.go:89] found id: ""
	I1010 19:26:21.430046  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.430058  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:21.430069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:21.430089  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:21.443267  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:21.443301  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:21.517046  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:21.517068  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:21.517083  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:21.594927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:21.594978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:21.634274  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:21.634311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:20.227157  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.727736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.566652  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:23.066459  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.550526  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.050647  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:24.189760  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:24.204355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:24.204420  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:24.248130  148123 cri.go:89] found id: ""
	I1010 19:26:24.248160  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.248171  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:24.248178  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:24.248244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:24.293281  148123 cri.go:89] found id: ""
	I1010 19:26:24.293312  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.293324  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:24.293331  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:24.293400  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:24.350709  148123 cri.go:89] found id: ""
	I1010 19:26:24.350743  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.350755  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:24.350765  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:24.350838  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:24.394113  148123 cri.go:89] found id: ""
	I1010 19:26:24.394152  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.394170  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:24.394181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:24.394256  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:24.445999  148123 cri.go:89] found id: ""
	I1010 19:26:24.446030  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.446042  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:24.446051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:24.446120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:24.483566  148123 cri.go:89] found id: ""
	I1010 19:26:24.483596  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.483605  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:24.483612  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:24.483665  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:24.520705  148123 cri.go:89] found id: ""
	I1010 19:26:24.520736  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.520748  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:24.520757  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:24.520825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:24.557316  148123 cri.go:89] found id: ""
	I1010 19:26:24.557346  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.557355  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:24.557364  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:24.557376  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:24.608065  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:24.608109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:24.623202  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:24.623250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:24.694665  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:24.694697  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:24.694716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.783369  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:24.783414  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.327524  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:27.341132  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:27.341214  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:27.376589  148123 cri.go:89] found id: ""
	I1010 19:26:27.376618  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.376627  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:27.376633  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:27.376684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:27.414452  148123 cri.go:89] found id: ""
	I1010 19:26:27.414491  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.414502  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:27.414510  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:27.414572  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:27.449839  148123 cri.go:89] found id: ""
	I1010 19:26:27.449867  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.449875  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:27.449882  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:27.449932  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:27.492445  148123 cri.go:89] found id: ""
	I1010 19:26:27.492472  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.492480  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:27.492487  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:27.492549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:27.533032  148123 cri.go:89] found id: ""
	I1010 19:26:27.533085  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.533095  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:27.533122  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:27.533199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:27.571096  148123 cri.go:89] found id: ""
	I1010 19:26:27.571122  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.571130  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:27.571135  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:27.571204  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:27.608762  148123 cri.go:89] found id: ""
	I1010 19:26:27.608798  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.608809  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:27.608818  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:27.608896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:27.648200  148123 cri.go:89] found id: ""
	I1010 19:26:27.648236  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.648248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:27.648260  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:27.648275  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.698124  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:27.698154  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:27.755009  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:27.755052  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:27.770048  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:27.770084  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:27.845475  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:27.845505  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:27.845519  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.729352  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:26.731831  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.566028  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.567052  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.555698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.049914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.423117  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:30.436706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:30.436768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:30.473367  148123 cri.go:89] found id: ""
	I1010 19:26:30.473396  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.473408  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:30.473417  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:30.473470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:30.510560  148123 cri.go:89] found id: ""
	I1010 19:26:30.510599  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.510610  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:30.510619  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:30.510687  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:30.549682  148123 cri.go:89] found id: ""
	I1010 19:26:30.549715  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.549726  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:30.549734  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:30.549798  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:30.586401  148123 cri.go:89] found id: ""
	I1010 19:26:30.586425  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.586434  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:30.586441  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:30.586492  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:30.622012  148123 cri.go:89] found id: ""
	I1010 19:26:30.622037  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.622045  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:30.622052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:30.622107  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:30.658336  148123 cri.go:89] found id: ""
	I1010 19:26:30.658364  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.658372  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:30.658378  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:30.658442  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:30.695495  148123 cri.go:89] found id: ""
	I1010 19:26:30.695523  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.695532  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:30.695538  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:30.695591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:30.735093  148123 cri.go:89] found id: ""
	I1010 19:26:30.735125  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.735136  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:30.735148  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:30.735163  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:30.816097  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:30.816140  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:30.853878  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:30.853915  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:30.906289  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:30.906341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:30.923742  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:30.923769  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:30.993226  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.493799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:33.508083  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:33.508166  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:33.545019  148123 cri.go:89] found id: ""
	I1010 19:26:33.545055  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.545072  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:33.545081  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:33.545150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:29.226673  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:31.227117  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.727777  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.068231  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.566025  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.050118  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:34.051720  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:36.550138  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.584215  148123 cri.go:89] found id: ""
	I1010 19:26:33.584242  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.584252  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:33.584259  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:33.584315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:33.620598  148123 cri.go:89] found id: ""
	I1010 19:26:33.620627  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.620636  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:33.620643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:33.620691  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:33.655225  148123 cri.go:89] found id: ""
	I1010 19:26:33.655252  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.655260  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:33.655267  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:33.655322  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:33.693464  148123 cri.go:89] found id: ""
	I1010 19:26:33.693490  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.693498  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:33.693505  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:33.693554  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:33.734024  148123 cri.go:89] found id: ""
	I1010 19:26:33.734059  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.734071  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:33.734079  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:33.734141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:33.769434  148123 cri.go:89] found id: ""
	I1010 19:26:33.769467  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.769476  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:33.769483  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:33.769533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:33.809011  148123 cri.go:89] found id: ""
	I1010 19:26:33.809040  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.809049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:33.809059  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:33.809071  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:33.864186  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:33.864230  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:33.878681  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:33.878711  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:33.958225  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.958250  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:33.958265  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:34.034730  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:34.034774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.577123  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:36.591494  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:36.591560  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:36.625170  148123 cri.go:89] found id: ""
	I1010 19:26:36.625210  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.625222  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:36.625230  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:36.625295  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:36.660790  148123 cri.go:89] found id: ""
	I1010 19:26:36.660821  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.660831  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:36.660838  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:36.660911  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:36.695742  148123 cri.go:89] found id: ""
	I1010 19:26:36.695774  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.695786  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:36.695793  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:36.695854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:36.729523  148123 cri.go:89] found id: ""
	I1010 19:26:36.729552  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.729561  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:36.729569  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:36.729630  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:36.765515  148123 cri.go:89] found id: ""
	I1010 19:26:36.765549  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.765561  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:36.765571  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:36.765635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:36.799110  148123 cri.go:89] found id: ""
	I1010 19:26:36.799144  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.799155  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:36.799164  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:36.799224  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:36.832806  148123 cri.go:89] found id: ""
	I1010 19:26:36.832834  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.832845  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:36.832865  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:36.832933  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:36.869093  148123 cri.go:89] found id: ""
	I1010 19:26:36.869126  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.869137  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:36.869149  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:36.869165  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:36.922229  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:36.922276  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:36.936973  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:36.937019  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:37.016400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:37.016425  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:37.016448  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:37.106308  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:37.106354  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.227451  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.726229  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:35.067396  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:37.565711  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.550438  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:41.050698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:39.652101  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:39.665546  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:39.665639  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:39.699646  148123 cri.go:89] found id: ""
	I1010 19:26:39.699675  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.699686  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:39.699693  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:39.699745  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:39.735568  148123 cri.go:89] found id: ""
	I1010 19:26:39.735592  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.735600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:39.735606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:39.735656  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:39.770021  148123 cri.go:89] found id: ""
	I1010 19:26:39.770049  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.770060  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:39.770067  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:39.770139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:39.804867  148123 cri.go:89] found id: ""
	I1010 19:26:39.804894  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.804903  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:39.804910  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:39.804972  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:39.847074  148123 cri.go:89] found id: ""
	I1010 19:26:39.847104  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.847115  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:39.847124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:39.847200  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:39.890334  148123 cri.go:89] found id: ""
	I1010 19:26:39.890368  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.890379  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:39.890387  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:39.890456  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:39.926316  148123 cri.go:89] found id: ""
	I1010 19:26:39.926346  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.926355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:39.926362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:39.926416  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:39.966179  148123 cri.go:89] found id: ""
	I1010 19:26:39.966215  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.966227  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:39.966239  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:39.966255  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:40.047457  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:40.047505  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.086611  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:40.086656  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:40.139123  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:40.139167  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:40.152966  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:40.152995  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:40.231009  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:42.731851  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:42.748201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:42.748287  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:42.787567  148123 cri.go:89] found id: ""
	I1010 19:26:42.787599  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.787607  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:42.787614  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:42.787697  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:42.831051  148123 cri.go:89] found id: ""
	I1010 19:26:42.831089  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.831100  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:42.831109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:42.831180  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:42.871352  148123 cri.go:89] found id: ""
	I1010 19:26:42.871390  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.871402  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:42.871410  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:42.871484  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:42.910283  148123 cri.go:89] found id: ""
	I1010 19:26:42.910316  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.910327  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:42.910335  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:42.910403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:42.946971  148123 cri.go:89] found id: ""
	I1010 19:26:42.947003  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.947012  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:42.947019  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:42.947087  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:42.985484  148123 cri.go:89] found id: ""
	I1010 19:26:42.985517  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.985528  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:42.985537  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:42.985603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:43.024500  148123 cri.go:89] found id: ""
	I1010 19:26:43.024535  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.024544  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:43.024552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:43.024608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:43.064008  148123 cri.go:89] found id: ""
	I1010 19:26:43.064039  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.064049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:43.064060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:43.064075  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:43.119367  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:43.119415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:43.133396  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:43.133428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:43.207447  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:43.207470  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:43.207491  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:43.290416  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:43.290456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.727919  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.227782  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:40.066461  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:42.565505  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.051835  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.052308  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.834115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:45.849172  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:45.849238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:45.886020  148123 cri.go:89] found id: ""
	I1010 19:26:45.886049  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.886060  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:45.886067  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:45.886152  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:45.923188  148123 cri.go:89] found id: ""
	I1010 19:26:45.923218  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.923245  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:45.923253  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:45.923316  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:45.960297  148123 cri.go:89] found id: ""
	I1010 19:26:45.960337  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.960351  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:45.960361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:45.960432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:45.995140  148123 cri.go:89] found id: ""
	I1010 19:26:45.995173  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.995184  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:45.995191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:45.995265  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:46.040390  148123 cri.go:89] found id: ""
	I1010 19:26:46.040418  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.040426  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:46.040433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:46.040500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:46.079999  148123 cri.go:89] found id: ""
	I1010 19:26:46.080029  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.080042  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:46.080052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:46.080105  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:46.117331  148123 cri.go:89] found id: ""
	I1010 19:26:46.117357  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.117364  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:46.117370  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:46.117441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:46.152248  148123 cri.go:89] found id: ""
	I1010 19:26:46.152279  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.152290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:46.152303  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:46.152319  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:46.192503  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:46.192533  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:46.246419  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:46.246456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:46.259864  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:46.259895  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:46.338518  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:46.338549  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:46.338567  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:45.726776  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.228318  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:44.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.065636  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.551013  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:50.053824  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.924216  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:48.938741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:48.938805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:48.973310  148123 cri.go:89] found id: ""
	I1010 19:26:48.973339  148123 logs.go:282] 0 containers: []
	W1010 19:26:48.973348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:48.973356  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:48.973411  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:49.010467  148123 cri.go:89] found id: ""
	I1010 19:26:49.010492  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.010500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:49.010506  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:49.010585  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:49.051621  148123 cri.go:89] found id: ""
	I1010 19:26:49.051646  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.051653  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:49.051664  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:49.051727  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:49.091090  148123 cri.go:89] found id: ""
	I1010 19:26:49.091121  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.091132  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:49.091140  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:49.091202  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:49.127669  148123 cri.go:89] found id: ""
	I1010 19:26:49.127712  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.127724  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:49.127732  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:49.127804  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:49.164446  148123 cri.go:89] found id: ""
	I1010 19:26:49.164476  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.164485  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:49.164491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:49.164553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:49.201222  148123 cri.go:89] found id: ""
	I1010 19:26:49.201263  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.201275  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:49.201284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:49.201345  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:49.238258  148123 cri.go:89] found id: ""
	I1010 19:26:49.238283  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.238290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:49.238299  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:49.238311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:49.313576  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:49.313619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:49.351066  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:49.351096  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:49.402772  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:49.402823  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:49.417713  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:49.417752  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:49.488834  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:51.989031  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:52.003056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:52.003140  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:52.039682  148123 cri.go:89] found id: ""
	I1010 19:26:52.039709  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.039718  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:52.039725  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:52.039788  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:52.081402  148123 cri.go:89] found id: ""
	I1010 19:26:52.081433  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.081443  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:52.081449  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:52.081502  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:52.118270  148123 cri.go:89] found id: ""
	I1010 19:26:52.118304  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.118315  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:52.118325  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:52.118392  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:52.154692  148123 cri.go:89] found id: ""
	I1010 19:26:52.154724  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.154735  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:52.154743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:52.154807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:52.190056  148123 cri.go:89] found id: ""
	I1010 19:26:52.190085  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.190094  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:52.190101  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:52.190161  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:52.225468  148123 cri.go:89] found id: ""
	I1010 19:26:52.225501  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.225511  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:52.225521  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:52.225589  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:52.260682  148123 cri.go:89] found id: ""
	I1010 19:26:52.260710  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.260718  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:52.260724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:52.260774  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:52.297613  148123 cri.go:89] found id: ""
	I1010 19:26:52.297642  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.297659  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:52.297672  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:52.297689  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:52.352224  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:52.352267  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:52.367003  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:52.367033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:52.443124  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:52.443157  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:52.443173  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:52.522391  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:52.522433  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:50.726363  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.727069  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:49.069109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:51.566132  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:53.567867  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.554195  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.050995  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.066877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:55.082191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:55.082258  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:55.120729  148123 cri.go:89] found id: ""
	I1010 19:26:55.120763  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.120775  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:55.120784  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:55.120863  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:55.154795  148123 cri.go:89] found id: ""
	I1010 19:26:55.154827  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.154839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:55.154848  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:55.154904  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:55.194846  148123 cri.go:89] found id: ""
	I1010 19:26:55.194876  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.194889  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:55.194897  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:55.194958  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:55.230173  148123 cri.go:89] found id: ""
	I1010 19:26:55.230201  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.230212  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:55.230220  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:55.230290  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:55.264999  148123 cri.go:89] found id: ""
	I1010 19:26:55.265025  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.265032  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:55.265039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:55.265096  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:55.303666  148123 cri.go:89] found id: ""
	I1010 19:26:55.303695  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.303703  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:55.303724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:55.303797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:55.342061  148123 cri.go:89] found id: ""
	I1010 19:26:55.342087  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.342098  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:55.342106  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:55.342171  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:55.378305  148123 cri.go:89] found id: ""
	I1010 19:26:55.378336  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.378345  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:55.378354  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:55.378365  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.416815  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:55.416865  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:55.467371  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:55.467415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:55.481704  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:55.481744  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:55.559219  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:55.559242  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:55.559254  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.141472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:58.154326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:58.154394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:58.193053  148123 cri.go:89] found id: ""
	I1010 19:26:58.193080  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.193088  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:58.193094  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:58.193142  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:58.232514  148123 cri.go:89] found id: ""
	I1010 19:26:58.232538  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.232545  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:58.232551  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:58.232599  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:58.266724  148123 cri.go:89] found id: ""
	I1010 19:26:58.266756  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.266765  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:58.266771  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:58.266824  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:58.301986  148123 cri.go:89] found id: ""
	I1010 19:26:58.302012  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.302019  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:58.302031  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:58.302080  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:58.340394  148123 cri.go:89] found id: ""
	I1010 19:26:58.340431  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.340440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:58.340448  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:58.340500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:58.407035  148123 cri.go:89] found id: ""
	I1010 19:26:58.407069  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.407081  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:58.407088  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:58.407151  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:58.446650  148123 cri.go:89] found id: ""
	I1010 19:26:58.446682  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.446694  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:58.446703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:58.446763  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:58.480719  148123 cri.go:89] found id: ""
	I1010 19:26:58.480750  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.480759  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:58.480768  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:58.480781  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.561568  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:58.561602  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.227199  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.726841  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:56.065787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.566732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.550718  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:59.550793  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.609237  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:58.609264  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:58.659705  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:58.659748  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:58.674646  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:58.674680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:58.750922  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.252098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:01.265420  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:01.265497  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:01.300133  148123 cri.go:89] found id: ""
	I1010 19:27:01.300168  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.300180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:01.300188  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:01.300272  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:01.337451  148123 cri.go:89] found id: ""
	I1010 19:27:01.337477  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.337485  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:01.337492  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:01.337539  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:01.371987  148123 cri.go:89] found id: ""
	I1010 19:27:01.372019  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.372028  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:01.372034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:01.372098  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:01.408996  148123 cri.go:89] found id: ""
	I1010 19:27:01.409022  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.409033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:01.409042  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:01.409109  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:01.443558  148123 cri.go:89] found id: ""
	I1010 19:27:01.443587  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.443595  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:01.443602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:01.443663  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:01.478517  148123 cri.go:89] found id: ""
	I1010 19:27:01.478546  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.478555  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:01.478561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:01.478611  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:01.513528  148123 cri.go:89] found id: ""
	I1010 19:27:01.513556  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.513568  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:01.513576  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:01.513641  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:01.558861  148123 cri.go:89] found id: ""
	I1010 19:27:01.558897  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.558909  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:01.558921  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:01.558937  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:01.599719  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:01.599747  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:01.650736  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:01.650774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:01.664643  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:01.664669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:01.736533  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.736557  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:01.736570  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:00.225540  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.226962  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:00.567193  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:03.066587  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.050439  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.050984  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:06.550977  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.317302  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:04.330450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:04.330519  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:04.368893  148123 cri.go:89] found id: ""
	I1010 19:27:04.368923  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.368932  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:04.368939  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:04.368993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:04.406598  148123 cri.go:89] found id: ""
	I1010 19:27:04.406625  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.406634  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:04.406640  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:04.406692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:04.441485  148123 cri.go:89] found id: ""
	I1010 19:27:04.441515  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.441525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:04.441532  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:04.441581  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:04.476663  148123 cri.go:89] found id: ""
	I1010 19:27:04.476690  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.476698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:04.476704  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:04.476765  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:04.512124  148123 cri.go:89] found id: ""
	I1010 19:27:04.512171  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.512186  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:04.512195  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:04.512260  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:04.547902  148123 cri.go:89] found id: ""
	I1010 19:27:04.547929  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.547940  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:04.547949  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:04.548008  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:04.585257  148123 cri.go:89] found id: ""
	I1010 19:27:04.585287  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.585297  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:04.585303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:04.585352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:04.621019  148123 cri.go:89] found id: ""
	I1010 19:27:04.621048  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.621057  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:04.621068  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:04.621080  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:04.662770  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:04.662801  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:04.714551  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:04.714592  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:04.730922  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:04.730951  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:04.841139  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.841163  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:04.841178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.428528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:07.442423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:07.442498  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:07.476722  148123 cri.go:89] found id: ""
	I1010 19:27:07.476753  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.476764  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:07.476772  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:07.476835  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:07.512211  148123 cri.go:89] found id: ""
	I1010 19:27:07.512245  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.512256  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:07.512263  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:07.512325  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:07.546546  148123 cri.go:89] found id: ""
	I1010 19:27:07.546588  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.546601  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:07.546619  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:07.546688  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:07.585743  148123 cri.go:89] found id: ""
	I1010 19:27:07.585768  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.585777  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:07.585783  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:07.585848  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:07.626827  148123 cri.go:89] found id: ""
	I1010 19:27:07.626855  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.626865  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:07.626874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:07.626960  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:07.661910  148123 cri.go:89] found id: ""
	I1010 19:27:07.661940  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.661948  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:07.661955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:07.662072  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:07.698255  148123 cri.go:89] found id: ""
	I1010 19:27:07.698288  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.698301  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:07.698309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:07.698374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:07.734389  148123 cri.go:89] found id: ""
	I1010 19:27:07.734417  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.734428  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:07.734440  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:07.734454  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.814089  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:07.814130  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:07.854799  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:07.854831  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:07.906595  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:07.906635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:07.921928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:07.921966  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:07.989839  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.727522  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.226694  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:05.565868  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.567139  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:09.050772  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:11.051291  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.490115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:10.504216  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:10.504292  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:10.554789  148123 cri.go:89] found id: ""
	I1010 19:27:10.554815  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.554825  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:10.554831  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:10.554889  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:10.618970  148123 cri.go:89] found id: ""
	I1010 19:27:10.618997  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.619005  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:10.619012  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:10.619061  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:10.666900  148123 cri.go:89] found id: ""
	I1010 19:27:10.666933  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.666946  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:10.666953  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:10.667023  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:10.704364  148123 cri.go:89] found id: ""
	I1010 19:27:10.704405  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.704430  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:10.704450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:10.704525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:10.742341  148123 cri.go:89] found id: ""
	I1010 19:27:10.742380  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.742389  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:10.742396  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:10.742461  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:10.783056  148123 cri.go:89] found id: ""
	I1010 19:27:10.783088  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.783098  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:10.783105  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:10.783174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:10.823198  148123 cri.go:89] found id: ""
	I1010 19:27:10.823224  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.823233  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:10.823243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:10.823302  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:10.858154  148123 cri.go:89] found id: ""
	I1010 19:27:10.858195  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.858204  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:10.858213  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:10.858229  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:10.935491  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:10.935518  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:10.935532  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:11.015653  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:11.015694  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:11.058765  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:11.058793  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:11.116748  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:11.116799  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:09.727270  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.225797  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.065372  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.065695  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.550669  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.051044  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.631579  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:13.644885  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:13.644953  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:13.685242  148123 cri.go:89] found id: ""
	I1010 19:27:13.685273  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.685285  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:13.685294  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:13.685364  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:13.720831  148123 cri.go:89] found id: ""
	I1010 19:27:13.720866  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.720877  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:13.720885  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:13.720947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:13.766374  148123 cri.go:89] found id: ""
	I1010 19:27:13.766404  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.766413  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:13.766419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:13.766468  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:13.804550  148123 cri.go:89] found id: ""
	I1010 19:27:13.804585  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.804597  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:13.804606  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:13.804674  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:13.842406  148123 cri.go:89] found id: ""
	I1010 19:27:13.842432  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.842440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:13.842460  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:13.842678  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:13.882365  148123 cri.go:89] found id: ""
	I1010 19:27:13.882402  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.882415  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:13.882423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:13.882505  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:13.926016  148123 cri.go:89] found id: ""
	I1010 19:27:13.926052  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.926065  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:13.926075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:13.926177  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:13.960485  148123 cri.go:89] found id: ""
	I1010 19:27:13.960523  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.960533  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:13.960542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:13.960558  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:14.013013  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:14.013055  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:14.026906  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:14.026935  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:14.104469  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:14.104494  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:14.104507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:14.185917  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:14.185959  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:16.732088  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:16.746646  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:16.746710  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:16.784715  148123 cri.go:89] found id: ""
	I1010 19:27:16.784739  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.784749  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:16.784755  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:16.784807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:16.824181  148123 cri.go:89] found id: ""
	I1010 19:27:16.824210  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.824220  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:16.824228  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:16.824289  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:16.860901  148123 cri.go:89] found id: ""
	I1010 19:27:16.860932  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.860941  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:16.860947  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:16.860997  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:16.895127  148123 cri.go:89] found id: ""
	I1010 19:27:16.895159  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.895175  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:16.895183  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:16.895266  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:16.931989  148123 cri.go:89] found id: ""
	I1010 19:27:16.932020  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.932028  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:16.932035  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:16.932086  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:16.967254  148123 cri.go:89] found id: ""
	I1010 19:27:16.967283  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.967292  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:16.967299  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:16.967347  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:17.003010  148123 cri.go:89] found id: ""
	I1010 19:27:17.003035  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.003043  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:17.003050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:17.003097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:17.040053  148123 cri.go:89] found id: ""
	I1010 19:27:17.040093  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.040102  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:17.040118  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:17.040131  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:17.093176  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:17.093217  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:17.108086  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:17.108116  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:17.186423  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:17.186452  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:17.186467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:17.271884  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:17.271926  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:14.227197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.739354  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:14.066233  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.565852  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.566337  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.051613  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:20.549888  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:19.817954  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:19.831924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:19.831993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:19.873415  148123 cri.go:89] found id: ""
	I1010 19:27:19.873441  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.873459  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:19.873466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:19.873527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:19.912174  148123 cri.go:89] found id: ""
	I1010 19:27:19.912207  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.912219  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:19.912227  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:19.912298  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:19.948422  148123 cri.go:89] found id: ""
	I1010 19:27:19.948456  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.948466  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:19.948472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:19.948524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:19.983917  148123 cri.go:89] found id: ""
	I1010 19:27:19.983949  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.983962  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:19.983970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:19.984024  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:20.019160  148123 cri.go:89] found id: ""
	I1010 19:27:20.019187  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.019198  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:20.019207  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:20.019271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:20.056073  148123 cri.go:89] found id: ""
	I1010 19:27:20.056104  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.056116  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:20.056124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:20.056187  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:20.095877  148123 cri.go:89] found id: ""
	I1010 19:27:20.095916  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.095928  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:20.095935  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:20.096007  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:20.132360  148123 cri.go:89] found id: ""
	I1010 19:27:20.132385  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.132394  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:20.132402  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:20.132413  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:20.190573  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:20.190619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:20.205785  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:20.205819  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:20.278882  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:20.278911  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:20.278924  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:20.363982  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:20.364024  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:22.906059  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:22.919839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:22.919912  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:22.958203  148123 cri.go:89] found id: ""
	I1010 19:27:22.958242  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.958252  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:22.958258  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:22.958312  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:22.995834  148123 cri.go:89] found id: ""
	I1010 19:27:22.995866  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.995874  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:22.995880  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:22.995945  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:23.035913  148123 cri.go:89] found id: ""
	I1010 19:27:23.035950  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.035962  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:23.035970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:23.036039  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:23.073995  148123 cri.go:89] found id: ""
	I1010 19:27:23.074036  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.074049  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:23.074057  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:23.074117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:23.111190  148123 cri.go:89] found id: ""
	I1010 19:27:23.111222  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.111234  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:23.111243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:23.111305  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:23.147771  148123 cri.go:89] found id: ""
	I1010 19:27:23.147797  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.147806  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:23.147814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:23.147878  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:23.185485  148123 cri.go:89] found id: ""
	I1010 19:27:23.185517  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.185527  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:23.185535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:23.185598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:23.222030  148123 cri.go:89] found id: ""
	I1010 19:27:23.222060  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.222070  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:23.222081  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:23.222097  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:23.301826  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:23.301849  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:23.301861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:23.378688  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:23.378730  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:23.426683  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:23.426717  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:23.478425  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:23.478467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:19.226994  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.727366  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.067094  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:23.567075  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:22.550076  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:24.551681  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:25.993768  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:26.008311  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:26.008383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:26.046315  148123 cri.go:89] found id: ""
	I1010 19:27:26.046343  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.046355  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:26.046364  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:26.046418  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:26.081823  148123 cri.go:89] found id: ""
	I1010 19:27:26.081847  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.081855  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:26.081861  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:26.081913  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:26.117139  148123 cri.go:89] found id: ""
	I1010 19:27:26.117167  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.117183  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:26.117191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:26.117242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:26.154477  148123 cri.go:89] found id: ""
	I1010 19:27:26.154501  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.154510  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:26.154517  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:26.154568  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:26.187497  148123 cri.go:89] found id: ""
	I1010 19:27:26.187527  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.187535  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:26.187541  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:26.187604  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:26.223110  148123 cri.go:89] found id: ""
	I1010 19:27:26.223140  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.223151  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:26.223160  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:26.223226  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:26.264975  148123 cri.go:89] found id: ""
	I1010 19:27:26.265003  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.265011  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:26.265017  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:26.265067  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:26.307467  148123 cri.go:89] found id: ""
	I1010 19:27:26.307494  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.307503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:26.307512  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:26.307524  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:26.346313  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:26.346342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:26.400069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:26.400109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:26.415467  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:26.415500  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:26.486986  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:26.487016  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:26.487033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:24.226736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.228720  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.726470  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.067100  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.565675  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:27.051110  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.051207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.553085  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.064522  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:29.077631  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:29.077696  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:29.116127  148123 cri.go:89] found id: ""
	I1010 19:27:29.116154  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.116164  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:29.116172  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:29.116241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:29.152863  148123 cri.go:89] found id: ""
	I1010 19:27:29.152893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.152901  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:29.152907  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:29.152963  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:29.189951  148123 cri.go:89] found id: ""
	I1010 19:27:29.189983  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.189992  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:29.189999  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:29.190060  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:29.226611  148123 cri.go:89] found id: ""
	I1010 19:27:29.226646  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.226657  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:29.226665  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:29.226729  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:29.271884  148123 cri.go:89] found id: ""
	I1010 19:27:29.271921  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.271934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:29.271944  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:29.272016  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:29.306128  148123 cri.go:89] found id: ""
	I1010 19:27:29.306168  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.306181  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:29.306191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:29.306255  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:29.341869  148123 cri.go:89] found id: ""
	I1010 19:27:29.341893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.341901  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:29.341908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:29.341962  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:29.376213  148123 cri.go:89] found id: ""
	I1010 19:27:29.376240  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.376249  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:29.376259  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:29.376273  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:29.428827  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:29.428878  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:29.443719  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:29.443754  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:29.516166  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:29.516193  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:29.516208  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:29.596643  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:29.596681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:32.135286  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:32.148791  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:32.148893  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:32.185511  148123 cri.go:89] found id: ""
	I1010 19:27:32.185542  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.185553  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:32.185560  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:32.185626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:32.222908  148123 cri.go:89] found id: ""
	I1010 19:27:32.222934  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.222942  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:32.222948  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:32.222999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:32.260998  148123 cri.go:89] found id: ""
	I1010 19:27:32.261033  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.261045  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:32.261063  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:32.261128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:32.298311  148123 cri.go:89] found id: ""
	I1010 19:27:32.298339  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.298348  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:32.298355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:32.298409  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:32.334134  148123 cri.go:89] found id: ""
	I1010 19:27:32.334220  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.334236  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:32.334249  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:32.334319  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:32.369690  148123 cri.go:89] found id: ""
	I1010 19:27:32.369723  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.369735  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:32.369743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:32.369807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:32.407164  148123 cri.go:89] found id: ""
	I1010 19:27:32.407210  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.407218  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:32.407224  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:32.407279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:32.446468  148123 cri.go:89] found id: ""
	I1010 19:27:32.446494  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.446505  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:32.446519  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:32.446535  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:32.497953  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:32.497997  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:32.513590  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:32.513620  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:32.594037  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:32.594072  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:32.594090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:32.673546  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:32.673587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:30.727725  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:32.727813  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.066731  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:33.067815  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:34.050574  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:36.550119  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.226084  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:35.241152  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:35.241222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:35.283208  148123 cri.go:89] found id: ""
	I1010 19:27:35.283245  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.283269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:35.283286  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:35.283346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:35.320406  148123 cri.go:89] found id: ""
	I1010 19:27:35.320444  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.320457  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:35.320466  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:35.320523  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:35.360984  148123 cri.go:89] found id: ""
	I1010 19:27:35.361015  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.361027  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:35.361034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:35.361097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:35.400191  148123 cri.go:89] found id: ""
	I1010 19:27:35.400219  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.400230  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:35.400244  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:35.400314  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:35.438092  148123 cri.go:89] found id: ""
	I1010 19:27:35.438126  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.438139  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:35.438147  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:35.438215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:35.472656  148123 cri.go:89] found id: ""
	I1010 19:27:35.472681  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.472688  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:35.472694  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:35.472743  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.505895  148123 cri.go:89] found id: ""
	I1010 19:27:35.505931  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.505942  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:35.505950  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:35.506015  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:35.540406  148123 cri.go:89] found id: ""
	I1010 19:27:35.540441  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.540451  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:35.540462  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:35.540478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:35.555874  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:35.555903  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:35.628400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:35.628427  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:35.628453  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:35.708262  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:35.708305  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:35.752796  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:35.752824  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.306212  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:38.319473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:38.319543  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:38.357493  148123 cri.go:89] found id: ""
	I1010 19:27:38.357520  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.357529  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:38.357536  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:38.357588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:38.395908  148123 cri.go:89] found id: ""
	I1010 19:27:38.395935  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.395943  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:38.395949  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:38.396010  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:38.434495  148123 cri.go:89] found id: ""
	I1010 19:27:38.434529  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.434539  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:38.434545  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:38.434608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:38.474647  148123 cri.go:89] found id: ""
	I1010 19:27:38.474686  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.474698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:38.474710  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:38.474784  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:38.511105  148123 cri.go:89] found id: ""
	I1010 19:27:38.511133  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.511141  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:38.511148  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:38.511199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:38.551011  148123 cri.go:89] found id: ""
	I1010 19:27:38.551055  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.551066  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:38.551074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:38.551139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.227301  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:37.726528  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.567838  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.066658  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.552499  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.544561  147758 pod_ready.go:82] duration metric: took 4m0.00091784s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	E1010 19:27:40.544600  147758 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:27:40.544623  147758 pod_ready.go:39] duration metric: took 4m15.623470592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:27:40.544664  147758 kubeadm.go:597] duration metric: took 4m22.92080204s to restartPrimaryControlPlane
	W1010 19:27:40.544737  147758 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:40.544829  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:38.590579  148123 cri.go:89] found id: ""
	I1010 19:27:38.590606  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.590615  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:38.590621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:38.590686  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:38.627775  148123 cri.go:89] found id: ""
	I1010 19:27:38.627812  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.627826  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:38.627838  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:38.627854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.683195  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:38.683239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:38.697220  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:38.697250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:38.771619  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:38.771651  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:38.771668  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:38.851324  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:38.851362  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.408165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:41.422958  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:41.423032  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:41.466774  148123 cri.go:89] found id: ""
	I1010 19:27:41.466806  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.466816  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:41.466823  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:41.466874  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:41.506429  148123 cri.go:89] found id: ""
	I1010 19:27:41.506470  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.506482  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:41.506489  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:41.506555  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:41.548929  148123 cri.go:89] found id: ""
	I1010 19:27:41.548965  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.548976  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:41.548983  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:41.549044  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:41.592243  148123 cri.go:89] found id: ""
	I1010 19:27:41.592274  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.592283  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:41.592290  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:41.592342  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:41.628516  148123 cri.go:89] found id: ""
	I1010 19:27:41.628558  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.628570  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:41.628578  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:41.628650  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:41.665011  148123 cri.go:89] found id: ""
	I1010 19:27:41.665041  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.665053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:41.665060  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:41.665112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:41.702650  148123 cri.go:89] found id: ""
	I1010 19:27:41.702681  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.702692  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:41.702700  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:41.702764  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:41.738971  148123 cri.go:89] found id: ""
	I1010 19:27:41.739002  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.739021  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:41.739031  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:41.739062  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:41.813296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:41.813321  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:41.813335  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:41.895138  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:41.895185  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.941975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:41.942012  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:41.994838  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:41.994888  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:39.727140  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:41.728263  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.566241  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:43.065219  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:44.511153  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:44.524871  148123 kubeadm.go:597] duration metric: took 4m4.194337013s to restartPrimaryControlPlane
	W1010 19:27:44.524953  148123 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:44.524988  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:44.994063  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:27:45.010926  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:27:45.021757  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:27:45.032246  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:27:45.032270  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:27:45.032326  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:27:45.042799  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:27:45.042866  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:27:45.053017  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:27:45.063373  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:27:45.063445  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:27:45.074689  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.085187  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:27:45.085241  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.096275  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:27:45.106686  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:27:45.106750  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:27:45.117018  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:27:45.358920  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
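The config-check sequence above (and the matching one for process 147758 below) follows one pattern per kubeconfig file: grep it for the control-plane endpoint and delete it when the endpoint is absent, so that the subsequent `kubeadm init` regenerates it. A minimal bash sketch of that sequence, assuming a simple loop; the endpoint, file names, and removal behaviour are taken from the log, the loop form is not minikube's actual code:

```bash
# Stale-config cleanup: keep a file only if it already points at the expected
# control-plane endpoint; otherwise remove it so kubeadm init can rewrite it.
endpoint="https://control-plane.minikube.internal:8443"
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
    sudo rm -f "/etc/kubernetes/$f"
  fi
done
```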
	I1010 19:27:44.226853  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:46.227586  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:48.727469  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:45.066410  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:47.569864  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:51.230704  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:53.727351  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:50.065845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:52.066267  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:55.727457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:58.226861  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:54.564611  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:56.566702  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:00.728542  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.225779  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:59.065614  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:01.068088  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.566502  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.739904  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.195045639s)
	I1010 19:28:06.739984  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:06.756046  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:06.768580  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:06.780663  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:06.780732  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:06.780807  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:28:06.792092  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:06.792179  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:06.804515  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:28:06.814969  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:06.815040  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:06.826056  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.836050  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:06.836108  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.846125  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:28:06.855505  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:06.855559  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:06.865367  147758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:06.916227  147758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:06.916375  147758 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:07.036539  147758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:07.036652  147758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:07.036762  147758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:07.044897  147758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:07.046978  147758 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:07.047117  147758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:07.047229  147758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:07.047384  147758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:07.047467  147758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:07.047584  147758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:07.047675  147758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:07.047794  147758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:07.047902  147758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:07.048005  147758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:07.048093  147758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:07.048142  147758 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:07.048210  147758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:07.127836  147758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:07.434492  147758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:07.487567  147758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:07.731314  147758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:07.919060  147758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:07.919565  147758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:07.922740  147758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:05.227611  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.229836  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.065246  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:08.067360  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.925140  147758 out.go:235]   - Booting up control plane ...
	I1010 19:28:07.925239  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:07.925356  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:07.925444  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:07.944375  147758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:07.951182  147758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:07.951274  147758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:08.087325  147758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:08.087560  147758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:08.598361  147758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.081439ms
	I1010 19:28:08.598502  147758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:09.727932  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:12.227939  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:10.566945  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:13.067142  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.100517  147758 kubeadm.go:310] [api-check] The API server is healthy after 5.501985157s
	I1010 19:28:14.119932  147758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:14.149557  147758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:14.207413  147758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:14.207735  147758 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-541370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:14.226199  147758 kubeadm.go:310] [bootstrap-token] Using token: sbg4v0.t5me93bb5vn8m913
	I1010 19:28:14.228059  147758 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:14.228208  147758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:14.241706  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:14.256554  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:14.263129  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:14.274346  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:14.282313  147758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:14.507850  147758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:14.970234  147758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:15.508328  147758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:15.509530  147758 kubeadm.go:310] 
	I1010 19:28:15.509635  147758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:15.509653  147758 kubeadm.go:310] 
	I1010 19:28:15.509743  147758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:15.509762  147758 kubeadm.go:310] 
	I1010 19:28:15.509795  147758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:15.509888  147758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:15.509954  147758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:15.509970  147758 kubeadm.go:310] 
	I1010 19:28:15.510083  147758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:15.510103  147758 kubeadm.go:310] 
	I1010 19:28:15.510203  147758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:15.510214  147758 kubeadm.go:310] 
	I1010 19:28:15.510297  147758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:15.510410  147758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:15.510489  147758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:15.510495  147758 kubeadm.go:310] 
	I1010 19:28:15.510603  147758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:15.510707  147758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:15.510724  147758 kubeadm.go:310] 
	I1010 19:28:15.510807  147758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.510958  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:15.511005  147758 kubeadm.go:310] 	--control-plane 
	I1010 19:28:15.511034  147758 kubeadm.go:310] 
	I1010 19:28:15.511161  147758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:15.511173  147758 kubeadm.go:310] 
	I1010 19:28:15.511268  147758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.511403  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:15.512298  147758 kubeadm.go:310] W1010 19:28:06.890572    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512594  147758 kubeadm.go:310] W1010 19:28:06.891448    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512702  147758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:15.512734  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:28:15.512744  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:15.514703  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:15.516229  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:15.527554  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:15.549266  147758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:15.549362  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:15.549399  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-541370 minikube.k8s.io/updated_at=2024_10_10T19_28_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=embed-certs-541370 minikube.k8s.io/primary=true
	I1010 19:28:15.590732  147758 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:15.740942  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.241392  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.741807  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:14.229241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:16.727260  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.059512  148525 pod_ready.go:82] duration metric: took 4m0.00022742s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:14.059550  148525 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:28:14.059569  148525 pod_ready.go:39] duration metric: took 4m7.001942194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:14.059614  148525 kubeadm.go:597] duration metric: took 4m14.998320151s to restartPrimaryControlPlane
	W1010 19:28:14.059672  148525 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:28:14.059698  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:28:17.241315  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:17.741580  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.241006  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.742042  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.241251  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.741030  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.862541  147758 kubeadm.go:1113] duration metric: took 4.313246481s to wait for elevateKubeSystemPrivileges
	I1010 19:28:19.862579  147758 kubeadm.go:394] duration metric: took 5m2.288571479s to StartCluster
	I1010 19:28:19.862628  147758 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.862751  147758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:19.864528  147758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.864812  147758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:19.864910  147758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:19.865019  147758 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-541370"
	I1010 19:28:19.865041  147758 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-541370"
	W1010 19:28:19.865053  147758 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:19.865062  147758 addons.go:69] Setting default-storageclass=true in profile "embed-certs-541370"
	I1010 19:28:19.865085  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865077  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:19.865129  147758 addons.go:69] Setting metrics-server=true in profile "embed-certs-541370"
	I1010 19:28:19.865164  147758 addons.go:234] Setting addon metrics-server=true in "embed-certs-541370"
	W1010 19:28:19.865179  147758 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:19.865115  147758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-541370"
	I1010 19:28:19.865215  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865558  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865593  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865607  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865629  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865595  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865725  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.866857  147758 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:19.868590  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:19.882524  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I1010 19:28:19.882595  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I1010 19:28:19.882678  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I1010 19:28:19.883065  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883168  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883281  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883559  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883575  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883657  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883669  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883802  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883818  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883968  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.883976  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884141  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884194  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.884408  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884437  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.884684  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884746  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.887912  147758 addons.go:234] Setting addon default-storageclass=true in "embed-certs-541370"
	W1010 19:28:19.887942  147758 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:19.887973  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.888333  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.888383  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.901588  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1010 19:28:19.902131  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.902597  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.902621  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.902927  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.903101  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.904556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.905207  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1010 19:28:19.905621  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.906188  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.906209  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.906599  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.906647  147758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:19.906837  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.907699  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1010 19:28:19.908147  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.908557  147758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:19.908584  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:19.908610  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.908705  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.908717  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.908745  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.909364  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.910154  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.910208  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.910840  147758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:19.912716  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.912722  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:19.912743  147758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:19.912769  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.913199  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.913224  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.913500  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.913682  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.913845  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.913972  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.921800  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922343  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.922374  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922653  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.922842  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.922965  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.923108  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.935097  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I1010 19:28:19.935605  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.936123  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.936146  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.936561  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.936747  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.938789  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.939019  147758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:19.939034  147758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:19.939054  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.941682  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942137  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.942165  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942404  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.942642  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.942767  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.942915  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:20.108247  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:20.149819  147758 node_ready.go:35] waiting up to 6m0s for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163096  147758 node_ready.go:49] node "embed-certs-541370" has status "Ready":"True"
	I1010 19:28:20.163118  147758 node_ready.go:38] duration metric: took 13.26779ms for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163128  147758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:20.168620  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:20.241952  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:20.241978  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:20.249679  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:20.290149  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:20.290190  147758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:20.291475  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:20.410539  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.410582  147758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:20.491567  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.684370  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684403  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.684695  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.684742  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.684749  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.684756  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684764  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.685029  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.685059  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.685036  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.695901  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.695926  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.696202  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.696249  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439463  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147952803s)
	I1010 19:28:21.439626  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.439659  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.439951  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.439969  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.439976  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439997  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.440009  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.440299  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.440298  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.440314  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.780486  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.288854773s)
	I1010 19:28:21.780551  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.780567  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.780948  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.780980  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.780996  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781007  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.781016  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.781289  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.781310  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781331  147758 addons.go:475] Verifying addon metrics-server=true in "embed-certs-541370"
	I1010 19:28:21.783512  147758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:21.784958  147758 addons.go:510] duration metric: took 1.92006141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:19.225844  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:21.227960  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:23.726439  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:22.195129  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:24.678736  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:25.727053  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.727657  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.177348  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:29.177459  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.177485  147758 pod_ready.go:82] duration metric: took 9.008841503s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.177495  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182744  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.182777  147758 pod_ready.go:82] duration metric: took 5.273263ms for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182791  147758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191507  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.191539  147758 pod_ready.go:82] duration metric: took 8.738961ms for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191554  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199167  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.199218  147758 pod_ready.go:82] duration metric: took 7.635672ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199234  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204558  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.204581  147758 pod_ready.go:82] duration metric: took 5.337574ms for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204591  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573781  147758 pod_ready.go:93] pod "kube-proxy-6hdds" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.573808  147758 pod_ready.go:82] duration metric: took 369.210969ms for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573818  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974015  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.974039  147758 pod_ready.go:82] duration metric: took 400.214845ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974048  147758 pod_ready.go:39] duration metric: took 9.810911064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:29.974066  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:29.974120  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:29.991332  147758 api_server.go:72] duration metric: took 10.126480862s to wait for apiserver process to appear ...
	I1010 19:28:29.991356  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:29.991382  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:28:29.995855  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:28:29.997488  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:28:29.997516  147758 api_server.go:131] duration metric: took 6.152312ms to wait for apiserver health ...
	I1010 19:28:29.997526  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:28:30.176631  147758 system_pods.go:59] 9 kube-system pods found
	I1010 19:28:30.176662  147758 system_pods.go:61] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.176668  147758 system_pods.go:61] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.176672  147758 system_pods.go:61] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.176676  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.176680  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.176683  147758 system_pods.go:61] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.176686  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.176693  147758 system_pods.go:61] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.176699  147758 system_pods.go:61] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.176707  147758 system_pods.go:74] duration metric: took 179.174083ms to wait for pod list to return data ...
	I1010 19:28:30.176714  147758 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:28:30.375326  147758 default_sa.go:45] found service account: "default"
	I1010 19:28:30.375361  147758 default_sa.go:55] duration metric: took 198.640267ms for default service account to be created ...
	I1010 19:28:30.375374  147758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:28:30.578749  147758 system_pods.go:86] 9 kube-system pods found
	I1010 19:28:30.578780  147758 system_pods.go:89] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.578786  147758 system_pods.go:89] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.578790  147758 system_pods.go:89] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.578794  147758 system_pods.go:89] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.578797  147758 system_pods.go:89] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.578801  147758 system_pods.go:89] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.578804  147758 system_pods.go:89] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.578810  147758 system_pods.go:89] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.578814  147758 system_pods.go:89] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.578822  147758 system_pods.go:126] duration metric: took 203.441477ms to wait for k8s-apps to be running ...
	I1010 19:28:30.578829  147758 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:28:30.578877  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:30.596523  147758 system_svc.go:56] duration metric: took 17.684729ms WaitForService to wait for kubelet
	I1010 19:28:30.596553  147758 kubeadm.go:582] duration metric: took 10.731708748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:28:30.596573  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:28:30.774749  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:28:30.774783  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:28:30.774807  147758 node_conditions.go:105] duration metric: took 178.228671ms to run NodePressure ...
	I1010 19:28:30.774822  147758 start.go:241] waiting for startup goroutines ...
	I1010 19:28:30.774831  147758 start.go:246] waiting for cluster config update ...
	I1010 19:28:30.774845  147758 start.go:255] writing updated cluster config ...
	I1010 19:28:30.775121  147758 ssh_runner.go:195] Run: rm -f paused
	I1010 19:28:30.826689  147758 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:28:30.828795  147758 out.go:177] * Done! kubectl is now configured to use "embed-certs-541370" cluster and "default" namespace by default
	I1010 19:28:29.728096  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:32.229632  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:34.726536  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:36.727032  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:38.727488  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:40.372903  148525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.31317648s)
	I1010 19:28:40.372991  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:40.389319  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:40.400123  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:40.411906  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:40.411932  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:40.411976  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:28:40.421840  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:40.421904  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:40.432229  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:28:40.442121  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:40.442203  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:40.452969  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.463085  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:40.463146  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.473103  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:28:40.482854  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:40.482914  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:40.494023  148525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:40.543369  148525 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:40.543466  148525 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:40.657301  148525 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:40.657462  148525 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:40.657579  148525 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:40.669222  148525 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:40.670995  148525 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:40.671102  148525 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:40.671171  148525 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:40.671284  148525 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:40.671374  148525 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:40.671471  148525 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:40.671557  148525 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:40.671650  148525 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:40.671751  148525 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:40.671895  148525 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:40.672000  148525 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:40.672056  148525 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:40.672136  148525 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:40.876613  148525 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:41.109518  148525 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:41.186751  148525 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:41.424710  148525 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:41.479611  148525 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:41.480235  148525 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:41.483222  148525 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:41.227521  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:43.728023  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:41.484809  148525 out.go:235]   - Booting up control plane ...
	I1010 19:28:41.484935  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:41.485020  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:41.485317  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:41.506919  148525 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:41.517006  148525 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:41.517077  148525 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:41.653211  148525 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:41.653364  148525 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:42.655360  148525 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001910447s
	I1010 19:28:42.655482  148525 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:47.658431  148525 kubeadm.go:310] [api-check] The API server is healthy after 5.003169217s
	I1010 19:28:47.676178  148525 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:47.694752  148525 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:47.720376  148525 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:47.720645  148525 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-361847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:47.736489  148525 kubeadm.go:310] [bootstrap-token] Using token: cprf0t.lm4xp75yi0cdu4sy
	I1010 19:28:46.228217  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:48.726740  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:47.737958  148525 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:47.738089  148525 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:47.750073  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:47.758010  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:47.761649  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:47.768953  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:47.774428  148525 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:48.065988  148525 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:48.502538  148525 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:49.066479  148525 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:49.069842  148525 kubeadm.go:310] 
	I1010 19:28:49.069937  148525 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:49.069947  148525 kubeadm.go:310] 
	I1010 19:28:49.070046  148525 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:49.070058  148525 kubeadm.go:310] 
	I1010 19:28:49.070089  148525 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:49.070166  148525 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:49.070254  148525 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:49.070265  148525 kubeadm.go:310] 
	I1010 19:28:49.070342  148525 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:49.070353  148525 kubeadm.go:310] 
	I1010 19:28:49.070446  148525 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:49.070478  148525 kubeadm.go:310] 
	I1010 19:28:49.070544  148525 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:49.070640  148525 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:49.070750  148525 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:49.070773  148525 kubeadm.go:310] 
	I1010 19:28:49.070880  148525 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:49.070990  148525 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:49.071001  148525 kubeadm.go:310] 
	I1010 19:28:49.071153  148525 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.071299  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:49.071330  148525 kubeadm.go:310] 	--control-plane 
	I1010 19:28:49.071349  148525 kubeadm.go:310] 
	I1010 19:28:49.071468  148525 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:49.071497  148525 kubeadm.go:310] 
	I1010 19:28:49.072228  148525 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.072354  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:49.074595  148525 kubeadm.go:310] W1010 19:28:40.525557    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.074944  148525 kubeadm.go:310] W1010 19:28:40.526329    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.075102  148525 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:49.075143  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:28:49.075166  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:49.077190  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:49.078665  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:49.091792  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:49.113801  148525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:49.113920  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-361847 minikube.k8s.io/updated_at=2024_10_10T19_28_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=default-k8s-diff-port-361847 minikube.k8s.io/primary=true
	I1010 19:28:49.114074  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.154398  148525 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:49.351271  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.852049  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.351441  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.852022  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.351391  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.851329  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.351840  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.852392  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.351397  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.443325  148525 kubeadm.go:1113] duration metric: took 4.329288133s to wait for elevateKubeSystemPrivileges
	I1010 19:28:53.443363  148525 kubeadm.go:394] duration metric: took 4m54.439732071s to StartCluster
	I1010 19:28:53.443386  148525 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.443481  148525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:53.445465  148525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.445747  148525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:53.445842  148525 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:53.445957  148525 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.445980  148525 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.445992  148525 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:53.446004  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:53.446026  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446065  148525 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446100  148525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-361847"
	I1010 19:28:53.446085  148525 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446137  148525 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.446151  148525 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:53.446242  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446515  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.446562  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447089  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447135  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447315  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447360  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.450779  148525 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:53.452838  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:53.465502  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I1010 19:28:53.466020  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.466572  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.466594  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.466772  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1010 19:28:53.467034  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.467209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.467310  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.467828  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.467857  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.467899  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1010 19:28:53.468270  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.468451  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.468866  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.468891  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.469102  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.469150  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.469484  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.470068  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.470114  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.471192  148525 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.471213  148525 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:53.471261  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.471618  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.471664  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.486550  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1010 19:28:53.487068  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.487608  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.487626  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.488015  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.488329  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.490200  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I1010 19:28:53.490240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.490790  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.491318  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.491341  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.491682  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.491957  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1010 19:28:53.492100  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.492423  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.492731  148525 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:53.492811  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.492831  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.493240  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.493885  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.493979  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.494031  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.494359  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:53.494381  148525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:53.494397  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.495771  148525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:51.226596  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227299  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227335  147213 pod_ready.go:82] duration metric: took 4m0.007224391s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:53.227346  147213 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1010 19:28:53.227355  147213 pod_ready.go:39] duration metric: took 4m5.554224355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.227375  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:53.227419  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:53.227484  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:53.288713  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.288749  147213 cri.go:89] found id: ""
	I1010 19:28:53.288759  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:53.288823  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.294819  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:53.294904  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:53.340169  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:53.340197  147213 cri.go:89] found id: ""
	I1010 19:28:53.340207  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:53.340271  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.345214  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:53.345292  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:53.392808  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.392838  147213 cri.go:89] found id: ""
	I1010 19:28:53.392859  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:53.392921  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.398275  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:53.398361  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:53.439567  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.439594  147213 cri.go:89] found id: ""
	I1010 19:28:53.439604  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:53.439665  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.444366  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:53.444436  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:53.522580  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:53.522597  147213 cri.go:89] found id: ""
	I1010 19:28:53.522605  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:53.522654  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.528890  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:53.528974  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:53.575933  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:53.575963  147213 cri.go:89] found id: ""
	I1010 19:28:53.575975  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:53.576035  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.581693  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:53.581763  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:53.619789  147213 cri.go:89] found id: ""
	I1010 19:28:53.619819  147213 logs.go:282] 0 containers: []
	W1010 19:28:53.619831  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:53.619839  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:53.619899  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:53.659715  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:53.659746  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:53.659752  147213 cri.go:89] found id: ""
	I1010 19:28:53.659762  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:53.659828  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.664377  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.668766  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:53.668796  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:53.685976  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:53.686007  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:53.497232  148525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:53.497251  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:53.497273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.497732  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498599  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.498627  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498971  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.499159  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.499312  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.499414  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.501044  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.501531  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501782  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.501956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.502080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.502232  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.512240  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1010 19:28:53.512809  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.513347  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.513368  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.513787  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.514001  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.515436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.515639  148525 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.515659  148525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:53.515681  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.518128  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518596  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.518628  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518909  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.519080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.519216  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.519376  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.712871  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:53.755059  148525 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766564  148525 node_ready.go:49] node "default-k8s-diff-port-361847" has status "Ready":"True"
	I1010 19:28:53.766590  148525 node_ready.go:38] duration metric: took 11.490223ms for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766603  148525 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.777458  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:53.875493  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:53.875525  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:53.911443  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.944885  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:53.944919  148525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:53.945487  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:54.011209  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.011239  148525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:54.039679  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.598172  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598226  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598584  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598608  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.598619  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598898  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:54.598931  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598939  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.643365  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.643392  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.643734  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.643760  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287018  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.341483807s)
	I1010 19:28:55.287045  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.247326452s)
	I1010 19:28:55.287089  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287094  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287112  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287440  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287479  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287506  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287524  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287570  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287589  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287598  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287607  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287818  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287831  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.287855  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287862  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287872  148525 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-361847"
	I1010 19:28:55.287880  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.289944  148525 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
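The addon-enable sequence above copies each manifest to /etc/kubernetes/addons over SSH and then applies it with the node-local kubectl. Below is a minimal sketch of that copy-then-apply pattern, assuming the ssh/scp binaries are available and using placeholder key, host and manifest values; it is illustrative only, not minikube's actual addons code.

// addon_apply_sketch.go: a hypothetical sketch of the copy-then-apply pattern
// seen in the log above (scp manifest --> /etc/kubernetes/addons, then
// "kubectl apply -f" with the node-local kubeconfig). Key path, host and
// manifest names are placeholders, not minikube's real implementation.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

const (
	sshKey  = "/home/jenkins/.minikube/machines/example/id_rsa" // placeholder
	sshHost = "docker@192.168.50.32"
	kubectl = "/var/lib/minikube/binaries/v1.31.1/kubectl"
)

// runSSH executes a remote command on the node and returns its combined output.
func runSSH(cmd string) ([]byte, error) {
	return exec.Command("ssh", "-i", sshKey, "-o", "StrictHostKeyChecking=no",
		sshHost, cmd).CombinedOutput()
}

func main() {
	manifests := []string{"storageclass.yaml", "metrics-server-deployment.yaml"}

	for _, m := range manifests {
		// Copy the manifest into the addons directory on the node.
		scp := exec.Command("scp", "-i", sshKey, "-o", "StrictHostKeyChecking=no",
			m, sshHost+":/etc/kubernetes/addons/"+m)
		if out, err := scp.CombinedOutput(); err != nil {
			log.Fatalf("scp %s: %v\n%s", m, err, out)
		}
	}

	// Apply everything in one kubectl call, as the log does for metrics-server.
	apply := fmt.Sprintf("sudo KUBECONFIG=/var/lib/minikube/kubeconfig %s apply -f /etc/kubernetes/addons/", kubectl)
	out, err := runSSH(apply)
	if err != nil {
		log.Fatalf("kubectl apply: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}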
	I1010 19:28:53.841387  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:53.841441  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.892951  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:53.893005  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.947636  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:53.947668  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.992969  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:53.992998  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:54.520652  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:54.520703  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:28:54.588366  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:54.588418  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:54.651179  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:54.651227  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:54.712881  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:54.712925  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:54.779030  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:54.779094  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:54.821961  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:54.822002  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:54.871409  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:54.871446  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:57.425310  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:57.442308  147213 api_server.go:72] duration metric: took 4m17.02881034s to wait for apiserver process to appear ...
	I1010 19:28:57.442343  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:57.442383  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:57.442444  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:57.481392  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.481420  147213 cri.go:89] found id: ""
	I1010 19:28:57.481430  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:57.481503  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.486191  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:57.486269  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:57.532238  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.532271  147213 cri.go:89] found id: ""
	I1010 19:28:57.532284  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:57.532357  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.538105  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:57.538188  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:57.579729  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:57.579757  147213 cri.go:89] found id: ""
	I1010 19:28:57.579767  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:57.579833  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.584494  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:57.584568  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:57.623920  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:57.623949  147213 cri.go:89] found id: ""
	I1010 19:28:57.623960  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:57.624028  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.628927  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:57.629018  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:57.669669  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.669698  147213 cri.go:89] found id: ""
	I1010 19:28:57.669707  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:57.669771  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.674449  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:57.674526  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:57.721856  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:57.721881  147213 cri.go:89] found id: ""
	I1010 19:28:57.721891  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:57.721955  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.726422  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:57.726497  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:57.764464  147213 cri.go:89] found id: ""
	I1010 19:28:57.764499  147213 logs.go:282] 0 containers: []
	W1010 19:28:57.764512  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:57.764521  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:57.764595  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:57.809758  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:57.809784  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:57.809788  147213 cri.go:89] found id: ""
	I1010 19:28:57.809797  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:57.809854  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.815576  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.820152  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:57.820181  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.869339  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:57.869383  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.918698  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:57.918739  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.960939  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:57.960985  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:58.013572  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:58.013612  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:58.053247  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:58.053277  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:58.507428  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:58.507473  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:58.552704  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:58.552742  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:58.672077  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:58.672127  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:58.690997  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:58.691049  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:58.735251  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:58.735287  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:55.291700  148525 addons.go:510] duration metric: took 1.845864985s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:55.785186  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:57.789567  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:00.284444  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:01.297627  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.297660  148525 pod_ready.go:82] duration metric: took 7.520173084s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.297676  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804654  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.804676  148525 pod_ready.go:82] duration metric: took 506.992872ms for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804690  148525 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809788  148525 pod_ready.go:93] pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.809814  148525 pod_ready.go:82] duration metric: took 5.116023ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809825  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814460  148525 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.814486  148525 pod_ready.go:82] duration metric: took 4.652085ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814501  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819719  148525 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.819741  148525 pod_ready.go:82] duration metric: took 5.231258ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819753  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082285  148525 pod_ready.go:93] pod "kube-proxy-jlvn6" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.082325  148525 pod_ready.go:82] duration metric: took 262.562954ms for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082342  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481705  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.481730  148525 pod_ready.go:82] duration metric: took 399.378957ms for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481742  148525 pod_ready.go:39] duration metric: took 8.715126416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:29:02.481779  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:29:02.481832  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:29:02.498706  148525 api_server.go:72] duration metric: took 9.052891898s to wait for apiserver process to appear ...
	I1010 19:29:02.498760  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:29:02.498795  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:29:02.503501  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:29:02.504594  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:02.504620  148525 api_server.go:131] duration metric: took 5.850548ms to wait for apiserver health ...
	I1010 19:29:02.504629  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:02.685579  148525 system_pods.go:59] 9 kube-system pods found
	I1010 19:29:02.685611  148525 system_pods.go:61] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:02.685618  148525 system_pods.go:61] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:02.685624  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:02.685630  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:02.685635  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:02.685639  148525 system_pods.go:61] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:02.685644  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:02.685653  148525 system_pods.go:61] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:02.685658  148525 system_pods.go:61] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:02.685669  148525 system_pods.go:74] duration metric: took 181.032548ms to wait for pod list to return data ...
	I1010 19:29:02.685683  148525 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:02.883256  148525 default_sa.go:45] found service account: "default"
	I1010 19:29:02.883288  148525 default_sa.go:55] duration metric: took 197.59742ms for default service account to be created ...
	I1010 19:29:02.883298  148525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:03.084706  148525 system_pods.go:86] 9 kube-system pods found
	I1010 19:29:03.084737  148525 system_pods.go:89] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:03.084742  148525 system_pods.go:89] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:03.084746  148525 system_pods.go:89] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:03.084751  148525 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:03.084755  148525 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:03.084759  148525 system_pods.go:89] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:03.084762  148525 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:03.084768  148525 system_pods.go:89] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:03.084772  148525 system_pods.go:89] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:03.084779  148525 system_pods.go:126] duration metric: took 201.476637ms to wait for k8s-apps to be running ...
	I1010 19:29:03.084787  148525 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:03.084832  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:03.100986  148525 system_svc.go:56] duration metric: took 16.183062ms WaitForService to wait for kubelet
	I1010 19:29:03.101026  148525 kubeadm.go:582] duration metric: took 9.655245557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:03.101050  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:03.282063  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:03.282095  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:03.282106  148525 node_conditions.go:105] duration metric: took 181.049888ms to run NodePressure ...
	I1010 19:29:03.282119  148525 start.go:241] waiting for startup goroutines ...
	I1010 19:29:03.282125  148525 start.go:246] waiting for cluster config update ...
	I1010 19:29:03.282135  148525 start.go:255] writing updated cluster config ...
	I1010 19:29:03.282414  148525 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:03.331838  148525 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:03.333698  148525 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-361847" cluster and "default" namespace by default
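The api_server.go lines above amount to polling the apiserver's /healthz endpoint until it answers 200 ("returned 200: ok"). Below is a minimal sketch of such a probe, assuming a self-signed cluster certificate (hence the skipped TLS verification) and an arbitrary timeout; it is not minikube's implementation.

// healthz_probe_sketch.go: poll an apiserver /healthz endpoint until it
// returns 200 or a deadline passes. Endpoint and timeout are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The cluster serves a self-signed certificate, so this illustrative
			// probe skips verification (do not do this against production APIs).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200, mirroring "returned 200: ok"
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.32:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}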
	I1010 19:28:58.775358  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:58.775396  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:58.812210  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:58.812269  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:01.381750  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:29:01.386658  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:29:01.387793  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:01.387819  147213 api_server.go:131] duration metric: took 3.945468552s to wait for apiserver health ...
	I1010 19:29:01.387829  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:01.387861  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:29:01.387948  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:29:01.433312  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:01.433344  147213 cri.go:89] found id: ""
	I1010 19:29:01.433433  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:29:01.433521  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.437920  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:29:01.437983  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:29:01.476429  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.476458  147213 cri.go:89] found id: ""
	I1010 19:29:01.476470  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:29:01.476522  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.480912  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:29:01.480987  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:29:01.522141  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.522164  147213 cri.go:89] found id: ""
	I1010 19:29:01.522173  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:29:01.522238  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.526742  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:29:01.526803  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:29:01.572715  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:01.572747  147213 cri.go:89] found id: ""
	I1010 19:29:01.572759  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:29:01.572814  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.577754  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:29:01.577832  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:29:01.616077  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.616104  147213 cri.go:89] found id: ""
	I1010 19:29:01.616121  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:29:01.616185  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.620622  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:29:01.620702  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:29:01.662859  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:01.662889  147213 cri.go:89] found id: ""
	I1010 19:29:01.662903  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:29:01.662964  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.667491  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:29:01.667585  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:29:01.706191  147213 cri.go:89] found id: ""
	I1010 19:29:01.706217  147213 logs.go:282] 0 containers: []
	W1010 19:29:01.706228  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:29:01.706234  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:29:01.706299  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:29:01.753559  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:01.753581  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:01.753584  147213 cri.go:89] found id: ""
	I1010 19:29:01.753591  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:29:01.753645  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.758179  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.762336  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:29:01.762358  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:29:01.867667  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:29:01.867698  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.911722  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:29:01.911756  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.955152  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:29:01.955189  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.995010  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:29:01.995041  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:02.047505  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:29:02.047546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:02.085080  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:29:02.085110  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:02.128482  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:29:02.128527  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:02.194867  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:29:02.194904  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:29:02.211881  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:29:02.211911  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:02.262969  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:29:02.263013  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:02.302921  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:29:02.302956  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:29:02.671102  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:29:02.671169  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:29:05.241477  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:29:05.241508  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.241513  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.241517  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.241521  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.241525  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.241528  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.241534  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.241540  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.241549  147213 system_pods.go:74] duration metric: took 3.853712488s to wait for pod list to return data ...
	I1010 19:29:05.241556  147213 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:05.244686  147213 default_sa.go:45] found service account: "default"
	I1010 19:29:05.244721  147213 default_sa.go:55] duration metric: took 3.158069ms for default service account to be created ...
	I1010 19:29:05.244733  147213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:05.249372  147213 system_pods.go:86] 8 kube-system pods found
	I1010 19:29:05.249398  147213 system_pods.go:89] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.249404  147213 system_pods.go:89] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.249408  147213 system_pods.go:89] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.249413  147213 system_pods.go:89] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.249418  147213 system_pods.go:89] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.249425  147213 system_pods.go:89] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.249433  147213 system_pods.go:89] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.249442  147213 system_pods.go:89] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.249455  147213 system_pods.go:126] duration metric: took 4.715381ms to wait for k8s-apps to be running ...
	I1010 19:29:05.249467  147213 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:05.249519  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:05.265180  147213 system_svc.go:56] duration metric: took 15.703413ms WaitForService to wait for kubelet
	I1010 19:29:05.265216  147213 kubeadm.go:582] duration metric: took 4m24.851723603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:05.265237  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:05.268775  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:05.268807  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:05.268821  147213 node_conditions.go:105] duration metric: took 3.575195ms to run NodePressure ...
	I1010 19:29:05.268834  147213 start.go:241] waiting for startup goroutines ...
	I1010 19:29:05.268840  147213 start.go:246] waiting for cluster config update ...
	I1010 19:29:05.268869  147213 start.go:255] writing updated cluster config ...
	I1010 19:29:05.269148  147213 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:05.319999  147213 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:05.322189  147213 out.go:177] * Done! kubectl is now configured to use "no-preload-320324" cluster and "default" namespace by default
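Both runs above gate readiness on each system pod reporting the Ready condition (the pod_ready.go lines). A rough equivalent using kubectl's jsonpath output is sketched below; the context, namespace and pod name are taken from the log, while the polling loop itself is only an illustration.

// pod_ready_sketch.go: poll a pod's Ready condition via kubectl jsonpath,
// roughly what the pod_ready.go lines above are doing. Names are taken from
// the log; the loop is an illustration, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(context, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
		"get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	const (
		context   = "no-preload-320324"
		namespace = "kube-system"
		pod       = "coredns-7c65d6cfc9-86brb"
	)
	deadline := time.Now().Add(6 * time.Minute) // the logs wait up to 6m0s per pod
	for time.Now().Before(deadline) {
		ready, err := podReady(context, namespace, pod)
		if err == nil && ready {
			fmt.Printf("pod %q is Ready\n", pod)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Printf("pod %q did not become Ready in time\n", pod)
}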
	I1010 19:29:41.275119  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:29:41.275272  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:29:41.276822  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:29:41.276919  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:29:41.277017  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:29:41.277142  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:29:41.277254  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:29:41.277357  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:29:41.279069  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:29:41.279160  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:29:41.279217  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:29:41.279306  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:29:41.279381  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:29:41.279484  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:29:41.279576  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:29:41.279674  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:29:41.279779  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:29:41.279906  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:29:41.279971  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:29:41.280005  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:29:41.280052  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:29:41.280095  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:29:41.280144  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:29:41.280219  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:29:41.280317  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:29:41.280474  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:29:41.280583  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:29:41.280648  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:29:41.280736  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:29:41.282180  148123 out.go:235]   - Booting up control plane ...
	I1010 19:29:41.282266  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:29:41.282336  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:29:41.282414  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:29:41.282538  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:29:41.282748  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:29:41.282822  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:29:41.282896  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283061  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283123  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283287  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283348  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283519  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283623  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283809  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283900  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.284115  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.284135  148123 kubeadm.go:310] 
	I1010 19:29:41.284201  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:29:41.284243  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:29:41.284257  148123 kubeadm.go:310] 
	I1010 19:29:41.284307  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:29:41.284346  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:29:41.284481  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:29:41.284505  148123 kubeadm.go:310] 
	I1010 19:29:41.284655  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:29:41.284714  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:29:41.284752  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:29:41.284758  148123 kubeadm.go:310] 
	I1010 19:29:41.284913  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:29:41.285038  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:29:41.285055  148123 kubeadm.go:310] 
	I1010 19:29:41.285169  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:29:41.285307  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:29:41.285475  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:29:41.285587  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:29:41.285644  148123 kubeadm.go:310] 
	W1010 19:29:41.285747  148123 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
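The kubeadm failure above ends with its standard troubleshooting hint: list the kube containers with crictl and inspect the logs of whichever one exited. The sketch below automates that hint on the node; the socket path and crictl invocations come from the message itself, everything else is an assumption.

// crictl_triage_sketch.go: automate the troubleshooting hint in the kubeadm
// output above: list kube* containers via crictl, then dump logs for any that
// exited. Must run on the node; the socket path comes from the log, the rest
// is an assumption.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const endpoint = "/var/run/crio/crio.sock"

// crictl runs "sudo crictl --runtime-endpoint <sock> <args...>" and returns its output.
func crictl(args ...string) (string, error) {
	full := append([]string{"crictl", "--runtime-endpoint", endpoint}, args...)
	out, err := exec.Command("sudo", full...).CombinedOutput()
	return string(out), err
}

func main() {
	// Roughly: crictl ps -a | grep kube | grep -v pause
	listing, err := crictl("ps", "-a")
	if err != nil {
		fmt.Println("crictl ps failed:", err, listing)
		return
	}
	for _, line := range strings.Split(listing, "\n")[1:] { // skip the header row
		if !strings.Contains(line, "kube") || strings.Contains(line, "pause") {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) == 0 {
			continue
		}
		id := fields[0]
		if strings.Contains(line, "Exited") {
			logs, _ := crictl("logs", id)
			fmt.Printf("--- logs for exited container %s ---\n%s\n", id, logs)
		}
	}
}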
	I1010 19:29:41.285792  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:29:46.681684  148123 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.395862838s)
	I1010 19:29:46.681797  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:46.696927  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:29:46.708250  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:29:46.708280  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:29:46.708339  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:29:46.719748  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:29:46.719828  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:29:46.730892  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:29:46.742329  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:29:46.742401  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:29:46.752919  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.762860  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:29:46.762932  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.772513  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:29:46.781672  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:29:46.781746  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:29:46.791250  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:29:47.018706  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:31:43.351851  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:31:43.351992  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:31:43.353664  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:31:43.353752  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:31:43.353861  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:31:43.353998  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:31:43.354130  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:31:43.354235  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:31:43.356450  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:31:43.356557  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:31:43.356652  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:31:43.356783  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:31:43.356902  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:31:43.357002  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:31:43.357074  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:31:43.357181  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:31:43.357247  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:31:43.357325  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:31:43.357408  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:31:43.357459  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:31:43.357529  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:31:43.357604  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:31:43.357676  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:31:43.357751  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:31:43.357830  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:31:43.357979  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:31:43.358092  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:31:43.358158  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:31:43.358263  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:31:43.360216  148123 out.go:235]   - Booting up control plane ...
	I1010 19:31:43.360332  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:31:43.360463  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:31:43.360551  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:31:43.360673  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:31:43.360907  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:31:43.360976  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:31:43.361058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361309  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361410  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361648  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361742  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361960  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362308  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362399  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362640  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362652  148123 kubeadm.go:310] 
	I1010 19:31:43.362708  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:31:43.362758  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:31:43.362768  148123 kubeadm.go:310] 
	I1010 19:31:43.362809  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:31:43.362858  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:31:43.362981  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:31:43.362993  148123 kubeadm.go:310] 
	I1010 19:31:43.363119  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:31:43.363164  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:31:43.363204  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:31:43.363219  148123 kubeadm.go:310] 
	I1010 19:31:43.363344  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:31:43.363454  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:31:43.363465  148123 kubeadm.go:310] 
	I1010 19:31:43.363591  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:31:43.363708  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:31:43.363803  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:31:43.363892  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:31:43.363978  148123 kubeadm.go:394] duration metric: took 8m3.085634474s to StartCluster
	I1010 19:31:43.364052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:31:43.364159  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:31:43.364268  148123 kubeadm.go:310] 
	I1010 19:31:43.413041  148123 cri.go:89] found id: ""
	I1010 19:31:43.413075  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.413086  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:31:43.413092  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:31:43.413206  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:31:43.456454  148123 cri.go:89] found id: ""
	I1010 19:31:43.456487  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.456499  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:31:43.456507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:31:43.456567  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:31:43.498582  148123 cri.go:89] found id: ""
	I1010 19:31:43.498614  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.498624  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:31:43.498633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:31:43.498694  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:31:43.540557  148123 cri.go:89] found id: ""
	I1010 19:31:43.540589  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.540598  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:31:43.540605  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:31:43.540661  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:31:43.576743  148123 cri.go:89] found id: ""
	I1010 19:31:43.576776  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.576787  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:31:43.576796  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:31:43.576887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:31:43.614620  148123 cri.go:89] found id: ""
	I1010 19:31:43.614652  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.614662  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:31:43.614668  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:31:43.614730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:31:43.652008  148123 cri.go:89] found id: ""
	I1010 19:31:43.652061  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.652075  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:31:43.652084  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:31:43.652153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:31:43.689990  148123 cri.go:89] found id: ""
	I1010 19:31:43.690019  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.690028  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:31:43.690043  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:31:43.690056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:31:43.745634  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:31:43.745673  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:31:43.760064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:31:43.760095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:31:43.840168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
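	The "connection refused" on localhost:8443 above is the expected symptom when the kubelet never launches the static kube-apiserver pod. A sketch of how this can be confirmed from the node (the crictl query mirrors the one minikube runs above; the ss check is an illustrative addition, not part of the original run):
	
		# no kube-apiserver container exists (matches the empty "found id" results above)
		sudo crictl ps -a --quiet --name=kube-apiserver
	
		# nothing is listening on the apiserver port
		sudo ss -ltn | grep 8443 || echo "nothing listening on 8443"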
	I1010 19:31:43.840197  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:31:43.840214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:31:43.943265  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:31:43.943303  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1010 19:31:43.987758  148123 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:31:43.987816  148123 out.go:270] * 
	W1010 19:31:43.987891  148123 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.987906  148123 out.go:270] * 
	W1010 19:31:43.988703  148123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:31:43.991928  148123 out.go:201] 
	W1010 19:31:43.993494  148123 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.993556  148123 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:31:43.993585  148123 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1010 19:31:43.995273  148123 out.go:201] 
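	The suggestion above amounts to retrying the start with the kubelet cgroup driver pinned to systemd. A sketch only: <profile> and any remaining flags are placeholders for whatever the test originally passed, while the --kubernetes-version, --container-runtime and --extra-config values are taken from this log:
	
		minikube start -p <profile> \
		  --kubernetes-version=v1.20.0 \
		  --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd
	
		# if it still fails, inspect the kubelet on the node again
		minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 50"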
	
	
	==> CRI-O <==
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.707807881Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf9819ab-1065-4d08-b7b1-9d942ddf22ad name=/runtime.v1.RuntimeService/Version
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.709609824Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c6264c4-7cb4-47ee-9889-d187665e6b98 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.710122763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589492710093770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c6264c4-7cb4-47ee-9889-d187665e6b98 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.710895021Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=682943a3-23dd-4d1b-bb98-85aa8403b170 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.710963614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=682943a3-23dd-4d1b-bb98-85aa8403b170 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.711167225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0def20106145a96ba01f0e844ac07a0c46313190a6835ec2ad8167db5f072e2f,PodSandboxId:3dcb6748315187166471c06f6429a87db9b7528ec88e1aa713728283129efd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588501941363227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb28184-daef-40be-9170-b42058727418,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9196310387b44e529ddad250a9c2322eb9daccc42f72fb7c3e06b51b182d24,PodSandboxId:e2fb0a2cbe21d7ebcc79a87f810888033c4d35c79e94b2ec7c7c591a618cc989,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501347414391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n7wxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb76f8e0d16bf490c5b0a16c6b7b62555965c8d0c861c7b8b68c27da4dcd936,PodSandboxId:fc83c022f5ff686a625fce3b0f99700157dbe5e393ebb8fc3f1a4b89554ec274,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501282332611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-59752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7980c69-dd8e-42e0-a0ab-1dedf2203367,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff67e9a4d0b9d09d6489b72b20e6c96dde13d1c32f246a901abe1da570d4218b,PodSandboxId:c7a0c173a7780258fdd91a72fa84622c49d87586253ab7b313e5ceb98582b031,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728588500229316816,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hdds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7cbbf4-12be-469d-b176-37c4daccab96,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6373a0366f8bd51fdb46f462b2b7d6b68e53f3726cfce9da4d1272094b33cfb,PodSandboxId:8412c7aa1ef71a8d51a14ef57fa83858096e2f5c20c25b0e8e898baf828bd79d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588489225563567,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c9d5c03b79507aec9719e9caf167d73e80671922639bec8a689fd1a4190ad0,PodSandboxId:ef03a0bfea565b3bd4550a95e429ed9bc9d7337edcbd8fe110914f581dcbc973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588489215686149,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5c1e154d45345ed7b2f0bd497cc877,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408a273bb46695bd06d19af956d4f1088d12f627e4715751845732fcb2e6e5b9,PodSandboxId:037867becb2b264537a1035b4af6b727abd224bd3ced4139e4e22bed0de67403,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588489247220640,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff06946beb7a9457525c6f483ee8641,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d921296cb4c649b76fbfd34f3670185a813610d065cfb2722c8b53977ab43,PodSandboxId:09baaf777b3b1146a8da3c69966ab512e19611eb713d50867a5a79ee4264490d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588489124804774,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a510bdbe8cdaaff1bec4f8a189faa6ac,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ee864d6e25c939a6b71b6b5507fed3cd1d7211fc7c2ff4963d9df8733e685,PodSandboxId:0ab0c873c1e1c3c2b5cb5c6fb3ef49585a8419193abfb7c4aa167da814115830,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588200154518322,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=682943a3-23dd-4d1b-bb98-85aa8403b170 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.719180216Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=b0f245cb-0955-482a-a45f-b3d41ef0483a name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.719452860Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3da4a991e8d73d33d3780074add6d9810212e86debac19c5432eedbd412638ec,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-znhn4,Uid:5dc1f764-c7c7-480e-b787-5f5cf6c14a84,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588501861179216,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-znhn4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5dc1f764-c7c7-480e-b787-5f5cf6c14a84,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-10T19:28:21.550927892Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3dcb6748315187166471c06f6429a87db9b7528ec88e1aa713728283129efd09,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:cfb28184-daef-40be-9170-b42058727418,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588501819075780,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb28184-daef-40be-9170-b42058727418,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-10T19:28:21.511680521Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e2fb0a2cbe21d7ebcc79a87f810888033c4d35c79e94b2ec7c7c591a618cc989,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-n7wxs,Uid:ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588500324042192,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-n7wxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-10T19:28:20.009113430Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fc83c022f5ff686a625fce3b0f99700157dbe5e393ebb8fc3f1a4b89554ec274,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-59752,Uid:f7980c69-dd8e-42e0
-a0ab-1dedf2203367,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588500280802429,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-59752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7980c69-dd8e-42e0-a0ab-1dedf2203367,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-10T19:28:19.966074644Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c7a0c173a7780258fdd91a72fa84622c49d87586253ab7b313e5ceb98582b031,Metadata:&PodSandboxMetadata{Name:kube-proxy-6hdds,Uid:fe7cbbf4-12be-469d-b176-37c4daccab96,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588499921124007,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6hdds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7cbbf4-12be-469d-b176-37c4daccab96,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-10T19:28:19.595571125Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8412c7aa1ef71a8d51a14ef57fa83858096e2f5c20c25b0e8e898baf828bd79d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-541370,Uid:eb6f95bcc8a65be3ea45039a22662946,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728588489004037509,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.120:8443,kubernetes.io/config.hash: eb6f95bcc8a65be3ea45039a22662946,kubernetes.io/config.seen: 2024-10-10T19:28:08.537609194Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:037867becb2b264537a1035b4af6
b727abd224bd3ced4139e4e22bed0de67403,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-541370,Uid:3ff06946beb7a9457525c6f483ee8641,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588489002662854,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff06946beb7a9457525c6f483ee8641,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.120:2379,kubernetes.io/config.hash: 3ff06946beb7a9457525c6f483ee8641,kubernetes.io/config.seen: 2024-10-10T19:28:08.537605351Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ef03a0bfea565b3bd4550a95e429ed9bc9d7337edcbd8fe110914f581dcbc973,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-541370,Uid:ac5c1e154d45345ed7b2f0bd497cc877,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588488988501156,Labels:m
ap[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5c1e154d45345ed7b2f0bd497cc877,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ac5c1e154d45345ed7b2f0bd497cc877,kubernetes.io/config.seen: 2024-10-10T19:28:08.537610396Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09baaf777b3b1146a8da3c69966ab512e19611eb713d50867a5a79ee4264490d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-541370,Uid:a510bdbe8cdaaff1bec4f8a189faa6ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728588488974771055,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a510bdbe8cdaaff1bec4f8a189faa6ac,tier: control-plane,},Annotations:map[str
ing]string{kubernetes.io/config.hash: a510bdbe8cdaaff1bec4f8a189faa6ac,kubernetes.io/config.seen: 2024-10-10T19:28:08.537611342Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ab0c873c1e1c3c2b5cb5c6fb3ef49585a8419193abfb7c4aa167da814115830,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-541370,Uid:eb6f95bcc8a65be3ea45039a22662946,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728588199961021444,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.120:8443,kubernetes.io/config.hash: eb6f95bcc8a65be3ea45039a22662946,kubernetes.io/config.seen: 2024-10-10T19:23:19.473161717Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-coll
ector/interceptors.go:74" id=b0f245cb-0955-482a-a45f-b3d41ef0483a name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.720402960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fe70ef1-c188-48a6-a145-dd75265decc2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.720475653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fe70ef1-c188-48a6-a145-dd75265decc2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.720695492Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0def20106145a96ba01f0e844ac07a0c46313190a6835ec2ad8167db5f072e2f,PodSandboxId:3dcb6748315187166471c06f6429a87db9b7528ec88e1aa713728283129efd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588501941363227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb28184-daef-40be-9170-b42058727418,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9196310387b44e529ddad250a9c2322eb9daccc42f72fb7c3e06b51b182d24,PodSandboxId:e2fb0a2cbe21d7ebcc79a87f810888033c4d35c79e94b2ec7c7c591a618cc989,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501347414391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n7wxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb76f8e0d16bf490c5b0a16c6b7b62555965c8d0c861c7b8b68c27da4dcd936,PodSandboxId:fc83c022f5ff686a625fce3b0f99700157dbe5e393ebb8fc3f1a4b89554ec274,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501282332611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-59752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7980c69-dd8e-42e0-a0ab-1dedf2203367,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff67e9a4d0b9d09d6489b72b20e6c96dde13d1c32f246a901abe1da570d4218b,PodSandboxId:c7a0c173a7780258fdd91a72fa84622c49d87586253ab7b313e5ceb98582b031,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728588500229316816,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hdds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7cbbf4-12be-469d-b176-37c4daccab96,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6373a0366f8bd51fdb46f462b2b7d6b68e53f3726cfce9da4d1272094b33cfb,PodSandboxId:8412c7aa1ef71a8d51a14ef57fa83858096e2f5c20c25b0e8e898baf828bd79d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588489225563567,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c9d5c03b79507aec9719e9caf167d73e80671922639bec8a689fd1a4190ad0,PodSandboxId:ef03a0bfea565b3bd4550a95e429ed9bc9d7337edcbd8fe110914f581dcbc973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588489215686149,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5c1e154d45345ed7b2f0bd497cc877,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408a273bb46695bd06d19af956d4f1088d12f627e4715751845732fcb2e6e5b9,PodSandboxId:037867becb2b264537a1035b4af6b727abd224bd3ced4139e4e22bed0de67403,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588489247220640,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff06946beb7a9457525c6f483ee8641,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d921296cb4c649b76fbfd34f3670185a813610d065cfb2722c8b53977ab43,PodSandboxId:09baaf777b3b1146a8da3c69966ab512e19611eb713d50867a5a79ee4264490d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588489124804774,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a510bdbe8cdaaff1bec4f8a189faa6ac,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ee864d6e25c939a6b71b6b5507fed3cd1d7211fc7c2ff4963d9df8733e685,PodSandboxId:0ab0c873c1e1c3c2b5cb5c6fb3ef49585a8419193abfb7c4aa167da814115830,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588200154518322,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fe70ef1-c188-48a6-a145-dd75265decc2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.751784233Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44f98913-f6b1-4058-adcc-d483ee8530b5 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.751915718Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44f98913-f6b1-4058-adcc-d483ee8530b5 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.753355057Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a0f0c81-eaef-4b87-9876-46e90a9f3ec1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.753982876Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589492753955264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a0f0c81-eaef-4b87-9876-46e90a9f3ec1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.754608262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=867ac64a-32cd-4270-80ba-86712b49ff13 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.754684862Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=867ac64a-32cd-4270-80ba-86712b49ff13 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.754940889Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0def20106145a96ba01f0e844ac07a0c46313190a6835ec2ad8167db5f072e2f,PodSandboxId:3dcb6748315187166471c06f6429a87db9b7528ec88e1aa713728283129efd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588501941363227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb28184-daef-40be-9170-b42058727418,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9196310387b44e529ddad250a9c2322eb9daccc42f72fb7c3e06b51b182d24,PodSandboxId:e2fb0a2cbe21d7ebcc79a87f810888033c4d35c79e94b2ec7c7c591a618cc989,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501347414391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n7wxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb76f8e0d16bf490c5b0a16c6b7b62555965c8d0c861c7b8b68c27da4dcd936,PodSandboxId:fc83c022f5ff686a625fce3b0f99700157dbe5e393ebb8fc3f1a4b89554ec274,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501282332611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-59752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7980c69-dd8e-42e0-a0ab-1dedf2203367,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff67e9a4d0b9d09d6489b72b20e6c96dde13d1c32f246a901abe1da570d4218b,PodSandboxId:c7a0c173a7780258fdd91a72fa84622c49d87586253ab7b313e5ceb98582b031,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728588500229316816,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hdds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7cbbf4-12be-469d-b176-37c4daccab96,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6373a0366f8bd51fdb46f462b2b7d6b68e53f3726cfce9da4d1272094b33cfb,PodSandboxId:8412c7aa1ef71a8d51a14ef57fa83858096e2f5c20c25b0e8e898baf828bd79d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588489225563567,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c9d5c03b79507aec9719e9caf167d73e80671922639bec8a689fd1a4190ad0,PodSandboxId:ef03a0bfea565b3bd4550a95e429ed9bc9d7337edcbd8fe110914f581dcbc973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588489215686149,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5c1e154d45345ed7b2f0bd497cc877,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408a273bb46695bd06d19af956d4f1088d12f627e4715751845732fcb2e6e5b9,PodSandboxId:037867becb2b264537a1035b4af6b727abd224bd3ced4139e4e22bed0de67403,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588489247220640,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff06946beb7a9457525c6f483ee8641,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d921296cb4c649b76fbfd34f3670185a813610d065cfb2722c8b53977ab43,PodSandboxId:09baaf777b3b1146a8da3c69966ab512e19611eb713d50867a5a79ee4264490d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588489124804774,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a510bdbe8cdaaff1bec4f8a189faa6ac,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ee864d6e25c939a6b71b6b5507fed3cd1d7211fc7c2ff4963d9df8733e685,PodSandboxId:0ab0c873c1e1c3c2b5cb5c6fb3ef49585a8419193abfb7c4aa167da814115830,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588200154518322,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=867ac64a-32cd-4270-80ba-86712b49ff13 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.789815769Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17afa8cb-682d-4aa9-838f-c58cfc6af725 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.790053527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17afa8cb-682d-4aa9-838f-c58cfc6af725 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.791273809Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6140b77f-e3c5-4faa-a389-e709cac8fc52 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.792413983Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589492792381663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6140b77f-e3c5-4faa-a389-e709cac8fc52 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.798750054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68fbbce3-8012-4d43-98d7-27c428946935 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.798934250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68fbbce3-8012-4d43-98d7-27c428946935 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:44:52 embed-certs-541370 crio[703]: time="2024-10-10 19:44:52.799146849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0def20106145a96ba01f0e844ac07a0c46313190a6835ec2ad8167db5f072e2f,PodSandboxId:3dcb6748315187166471c06f6429a87db9b7528ec88e1aa713728283129efd09,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588501941363227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfb28184-daef-40be-9170-b42058727418,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df9196310387b44e529ddad250a9c2322eb9daccc42f72fb7c3e06b51b182d24,PodSandboxId:e2fb0a2cbe21d7ebcc79a87f810888033c4d35c79e94b2ec7c7c591a618cc989,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501347414391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n7wxs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eb76f8e0d16bf490c5b0a16c6b7b62555965c8d0c861c7b8b68c27da4dcd936,PodSandboxId:fc83c022f5ff686a625fce3b0f99700157dbe5e393ebb8fc3f1a4b89554ec274,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588501282332611,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-59752,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f
7980c69-dd8e-42e0-a0ab-1dedf2203367,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff67e9a4d0b9d09d6489b72b20e6c96dde13d1c32f246a901abe1da570d4218b,PodSandboxId:c7a0c173a7780258fdd91a72fa84622c49d87586253ab7b313e5ceb98582b031,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728588500229316816,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hdds,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe7cbbf4-12be-469d-b176-37c4daccab96,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6373a0366f8bd51fdb46f462b2b7d6b68e53f3726cfce9da4d1272094b33cfb,PodSandboxId:8412c7aa1ef71a8d51a14ef57fa83858096e2f5c20c25b0e8e898baf828bd79d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588489225563567,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73c9d5c03b79507aec9719e9caf167d73e80671922639bec8a689fd1a4190ad0,PodSandboxId:ef03a0bfea565b3bd4550a95e429ed9bc9d7337edcbd8fe110914f581dcbc973,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588489215686149,Labels:map[
string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac5c1e154d45345ed7b2f0bd497cc877,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:408a273bb46695bd06d19af956d4f1088d12f627e4715751845732fcb2e6e5b9,PodSandboxId:037867becb2b264537a1035b4af6b727abd224bd3ced4139e4e22bed0de67403,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588489247220640,Labels:map[stri
ng]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ff06946beb7a9457525c6f483ee8641,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f7d921296cb4c649b76fbfd34f3670185a813610d065cfb2722c8b53977ab43,PodSandboxId:09baaf777b3b1146a8da3c69966ab512e19611eb713d50867a5a79ee4264490d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588489124804774,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a510bdbe8cdaaff1bec4f8a189faa6ac,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81ee864d6e25c939a6b71b6b5507fed3cd1d7211fc7c2ff4963d9df8733e685,PodSandboxId:0ab0c873c1e1c3c2b5cb5c6fb3ef49585a8419193abfb7c4aa167da814115830,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588200154518322,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-541370,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb6f95bcc8a65be3ea45039a22662946,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68fbbce3-8012-4d43-98d7-27c428946935 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0def20106145a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   3dcb674831518       storage-provisioner
	df9196310387b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   e2fb0a2cbe21d       coredns-7c65d6cfc9-n7wxs
	6eb76f8e0d16b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   fc83c022f5ff6       coredns-7c65d6cfc9-59752
	ff67e9a4d0b9d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   c7a0c173a7780       kube-proxy-6hdds
	408a273bb4669       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   037867becb2b2       etcd-embed-certs-541370
	c6373a0366f8b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   8412c7aa1ef71       kube-apiserver-embed-certs-541370
	73c9d5c03b795       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   ef03a0bfea565       kube-controller-manager-embed-certs-541370
	2f7d921296cb4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   09baaf777b3b1       kube-scheduler-embed-certs-541370
	f81ee864d6e25       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   0ab0c873c1e1c       kube-apiserver-embed-certs-541370
	
	
	==> coredns [6eb76f8e0d16bf490c5b0a16c6b7b62555965c8d0c861c7b8b68c27da4dcd936] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [df9196310387b44e529ddad250a9c2322eb9daccc42f72fb7c3e06b51b182d24] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-541370
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-541370
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=embed-certs-541370
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T19_28_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 19:28:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-541370
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 19:44:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 19:43:43 +0000   Thu, 10 Oct 2024 19:28:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 19:43:43 +0000   Thu, 10 Oct 2024 19:28:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 19:43:43 +0000   Thu, 10 Oct 2024 19:28:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 19:43:43 +0000   Thu, 10 Oct 2024 19:28:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    embed-certs-541370
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e13a128c52ea4352aceae98f7e8f44c9
	  System UUID:                e13a128c-52ea-4352-acea-e98f7e8f44c9
	  Boot ID:                    8c4a8121-3b24-41fb-98a7-05e8fae9b2c6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-59752                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-n7wxs                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-541370                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-541370             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-541370    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-6hdds                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-541370             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-znhn4               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-541370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-541370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-541370 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-541370 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-541370 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-541370 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-541370 event: Registered Node embed-certs-541370 in Controller
	
	
	==> dmesg <==
	[  +0.050711] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040360] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.897453] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Oct10 19:23] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.469146] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.852281] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.057338] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071939] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.220739] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.149348] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.336470] systemd-fstab-generator[692]: Ignoring "noauto" option for root device
	[  +4.390101] systemd-fstab-generator[781]: Ignoring "noauto" option for root device
	[  +0.062272] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.231427] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +4.581702] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.018338] kauditd_printk_skb: 85 callbacks suppressed
	[Oct10 19:28] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.285654] systemd-fstab-generator[2565]: Ignoring "noauto" option for root device
	[  +4.604140] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.959778] systemd-fstab-generator[2884]: Ignoring "noauto" option for root device
	[  +5.398379] systemd-fstab-generator[3000]: Ignoring "noauto" option for root device
	[  +0.130563] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.016002] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [408a273bb46695bd06d19af956d4f1088d12f627e4715751845732fcb2e6e5b9] <==
	{"level":"info","ts":"2024-10-10T19:28:10.034015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-10T19:28:10.034066Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 received MsgPreVoteResp from af2c917f7a70ddd0 at term 1"}
	{"level":"info","ts":"2024-10-10T19:28:10.034101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became candidate at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:10.034133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 received MsgVoteResp from af2c917f7a70ddd0 at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:10.034171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"af2c917f7a70ddd0 became leader at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:10.034196Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: af2c917f7a70ddd0 elected leader af2c917f7a70ddd0 at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:10.039249Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:10.042148Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"af2c917f7a70ddd0","local-member-attributes":"{Name:embed-certs-541370 ClientURLs:[https://192.168.39.120:2379]}","request-path":"/0/members/af2c917f7a70ddd0/attributes","cluster-id":"f3de5e1602edc73b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-10T19:28:10.042395Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:28:10.042792Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:28:10.043643Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:28:10.050386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.120:2379"}
	{"level":"info","ts":"2024-10-10T19:28:10.053188Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f3de5e1602edc73b","local-member-id":"af2c917f7a70ddd0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:10.057992Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:10.059884Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:10.053670Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:28:10.060631Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-10T19:28:10.063944Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-10T19:28:10.064316Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-10T19:38:10.246047Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":723}
	{"level":"info","ts":"2024-10-10T19:38:10.260367Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":723,"took":"12.019466ms","hash":435238906,"current-db-size-bytes":2363392,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2363392,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-10T19:38:10.260484Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":435238906,"revision":723,"compact-revision":-1}
	{"level":"info","ts":"2024-10-10T19:43:10.261465Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-10-10T19:43:10.265958Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":966,"took":"3.741361ms","hash":3107684303,"current-db-size-bytes":2363392,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-10T19:43:10.266065Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3107684303,"revision":966,"compact-revision":723}
	
	
	==> kernel <==
	 19:44:53 up 21 min,  0 users,  load average: 0.20, 0.15, 0.14
	Linux embed-certs-541370 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c6373a0366f8bd51fdb46f462b2b7d6b68e53f3726cfce9da4d1272094b33cfb] <==
	I1010 19:41:13.078341       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:41:13.078377       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:43:12.077818       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:43:12.078037       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1010 19:43:13.080134       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:43:13.080193       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1010 19:43:13.080238       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:43:13.080287       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:43:13.081416       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:43:13.081468       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:44:13.081941       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:44:13.082025       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1010 19:44:13.082105       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:44:13.082166       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:44:13.083190       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:44:13.083300       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [f81ee864d6e25c939a6b71b6b5507fed3cd1d7211fc7c2ff4963d9df8733e685] <==
	W1010 19:28:00.986345       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:04.491191       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:04.678344       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:04.688806       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:04.839451       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.203085       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.310810       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.315530       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.554372       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.607042       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.627185       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.692668       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:05.989573       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.017705       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.029339       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.109177       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.121763       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.226352       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.230120       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.249101       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.284490       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.375669       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.384344       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.451123       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:06.451498       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [73c9d5c03b79507aec9719e9caf167d73e80671922639bec8a689fd1a4190ad0] <==
	E1010 19:39:49.191089       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:39:49.694444       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:40:19.197951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:40:19.703668       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:40:49.204982       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:40:49.714584       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:41:19.212489       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:41:19.726576       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:41:49.219675       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:41:49.734483       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:42:19.226029       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:42:19.742477       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:42:49.232576       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:42:49.750337       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:43:19.239786       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:43:19.758192       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:43:43.104329       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-541370"
	E1010 19:43:49.246723       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:43:49.766174       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:44:15.889453       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="294.257µs"
	E1010 19:44:19.253680       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:44:19.773609       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:44:29.888237       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="68.413µs"
	E1010 19:44:49.260019       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:44:49.781383       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [ff67e9a4d0b9d09d6489b72b20e6c96dde13d1c32f246a901abe1da570d4218b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 19:28:20.736976       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 19:28:20.752734       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.120"]
	E1010 19:28:20.752803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 19:28:20.840970       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 19:28:20.841022       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 19:28:20.841044       1 server_linux.go:169] "Using iptables Proxier"
	I1010 19:28:20.846444       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 19:28:20.846742       1 server.go:483] "Version info" version="v1.31.1"
	I1010 19:28:20.846755       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 19:28:20.848975       1 config.go:199] "Starting service config controller"
	I1010 19:28:20.849000       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 19:28:20.849063       1 config.go:105] "Starting endpoint slice config controller"
	I1010 19:28:20.849067       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 19:28:20.849452       1 config.go:328] "Starting node config controller"
	I1010 19:28:20.849458       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 19:28:20.950957       1 shared_informer.go:320] Caches are synced for node config
	I1010 19:28:20.951021       1 shared_informer.go:320] Caches are synced for service config
	I1010 19:28:20.951040       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2f7d921296cb4c649b76fbfd34f3670185a813610d065cfb2722c8b53977ab43] <==
	W1010 19:28:12.939259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1010 19:28:12.939372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.007748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1010 19:28:13.007919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.027416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1010 19:28:13.027506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.060343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1010 19:28:13.060471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.079037       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1010 19:28:13.079130       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1010 19:28:13.134236       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1010 19:28:13.134390       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.150917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1010 19:28:13.151047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.377384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1010 19:28:13.377434       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.446226       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1010 19:28:13.446277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.450042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1010 19:28:13.450090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.464009       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 19:28:13.464058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:13.472984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1010 19:28:13.473147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1010 19:28:15.983437       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 10 19:43:46 embed-certs-541370 kubelet[2891]: E1010 19:43:46.873097    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-znhn4" podUID="5dc1f764-c7c7-480e-b787-5f5cf6c14a84"
	Oct 10 19:43:55 embed-certs-541370 kubelet[2891]: E1010 19:43:55.177096    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589435176680311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:55 embed-certs-541370 kubelet[2891]: E1010 19:43:55.177150    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589435176680311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:44:01 embed-certs-541370 kubelet[2891]: E1010 19:44:01.887699    2891 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 10 19:44:01 embed-certs-541370 kubelet[2891]: E1010 19:44:01.888074    2891 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 10 19:44:01 embed-certs-541370 kubelet[2891]: E1010 19:44:01.888280    2891 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prs5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-znhn4_kube-system(5dc1f764-c7c7-480e-b787-5f5cf6c14a84): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 10 19:44:01 embed-certs-541370 kubelet[2891]: E1010 19:44:01.889667    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-znhn4" podUID="5dc1f764-c7c7-480e-b787-5f5cf6c14a84"
	Oct 10 19:44:05 embed-certs-541370 kubelet[2891]: E1010 19:44:05.179110    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589445178378619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:44:05 embed-certs-541370 kubelet[2891]: E1010 19:44:05.179445    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589445178378619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:44:14 embed-certs-541370 kubelet[2891]: E1010 19:44:14.893755    2891 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 19:44:14 embed-certs-541370 kubelet[2891]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 19:44:14 embed-certs-541370 kubelet[2891]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 19:44:14 embed-certs-541370 kubelet[2891]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 19:44:14 embed-certs-541370 kubelet[2891]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 19:44:15 embed-certs-541370 kubelet[2891]: E1010 19:44:15.181489    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589455180934982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:44:15 embed-certs-541370 kubelet[2891]: E1010 19:44:15.181583    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589455180934982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:44:15 embed-certs-541370 kubelet[2891]: E1010 19:44:15.873337    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-znhn4" podUID="5dc1f764-c7c7-480e-b787-5f5cf6c14a84"
	Oct 10 19:44:25 embed-certs-541370 kubelet[2891]: E1010 19:44:25.183641    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589465183191402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:44:25 embed-certs-541370 kubelet[2891]: E1010 19:44:25.183702    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589465183191402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:44:29 embed-certs-541370 kubelet[2891]: E1010 19:44:29.872795    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-znhn4" podUID="5dc1f764-c7c7-480e-b787-5f5cf6c14a84"
	Oct 10 19:44:35 embed-certs-541370 kubelet[2891]: E1010 19:44:35.186129    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589475185472638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:44:35 embed-certs-541370 kubelet[2891]: E1010 19:44:35.186249    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589475185472638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:44:44 embed-certs-541370 kubelet[2891]: E1010 19:44:44.873286    2891 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-znhn4" podUID="5dc1f764-c7c7-480e-b787-5f5cf6c14a84"
	Oct 10 19:44:45 embed-certs-541370 kubelet[2891]: E1010 19:44:45.188287    2891 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589485187879338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:44:45 embed-certs-541370 kubelet[2891]: E1010 19:44:45.188356    2891 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589485187879338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0def20106145a96ba01f0e844ac07a0c46313190a6835ec2ad8167db5f072e2f] <==
	I1010 19:28:22.054302       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 19:28:22.080099       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 19:28:22.080194       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 19:28:22.123905       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 19:28:22.128072       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-541370_93528444-997f-4b6b-ab97-82466bf6ac65!
	I1010 19:28:22.130073       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1112543b-e8f4-4def-b9b6-5b576a2e4ce3", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-541370_93528444-997f-4b6b-ab97-82466bf6ac65 became leader
	I1010 19:28:22.229427       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-541370_93528444-997f-4b6b-ab97-82466bf6ac65!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-541370 -n embed-certs-541370
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-541370 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-znhn4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-541370 describe pod metrics-server-6867b74b74-znhn4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-541370 describe pod metrics-server-6867b74b74-znhn4: exit status 1 (70.063956ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-znhn4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-541370 describe pod metrics-server-6867b74b74-znhn4: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (439.72s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (536.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-10 19:47:00.315805129 +0000 UTC m=+6568.000740325
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-361847 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-361847 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.46µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-361847 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
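A minimal manual equivalent of the checks above, assuming the profile's kubeconfig context is still reachable (the jsonpath image query is an illustrative stand-in for the describe output the test inspects, not the test's own command):

	kubectl --context default-k8s-diff-port-361847 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-361847 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

With the dashboard addon loaded as requested in the audit log (--images=MetricsScraper=registry.k8s.io/echoserver:1.4), the second command would be expected to print an image containing registry.k8s.io/echoserver:1.4; here the deployment never appeared, so both commands come back empty or with errors.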
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-361847 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-361847 logs -n 25: (2.11100389s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p newest-cni-029826                  | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-029826 --memory=2200 --alsologtostderr   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-541370            | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-029826 image list                           | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:17 UTC | 10 Oct 24 19:18 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320324                  | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-947203        | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-361847  | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-541370                 | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-947203             | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-361847       | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC | 10 Oct 24 19:29 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:43 UTC | 10 Oct 24 19:43 UTC |
	| delete  | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:44 UTC | 10 Oct 24 19:44 UTC |
	| delete  | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:44 UTC | 10 Oct 24 19:44 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 19:21:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 19:21:13.943219  148525 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:21:13.943336  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943343  148525 out.go:358] Setting ErrFile to fd 2...
	I1010 19:21:13.943347  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943560  148525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:21:13.944109  148525 out.go:352] Setting JSON to false
	I1010 19:21:13.945219  148525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11020,"bootTime":1728577054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:21:13.945321  148525 start.go:139] virtualization: kvm guest
	I1010 19:21:13.947915  148525 out.go:177] * [default-k8s-diff-port-361847] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:21:13.950021  148525 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:21:13.950037  148525 notify.go:220] Checking for updates...
	I1010 19:21:13.952994  148525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:21:13.954661  148525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:21:13.956438  148525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:21:13.958502  148525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:21:13.960099  148525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:21:13.961930  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:21:13.962374  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.962450  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.978323  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I1010 19:21:13.978926  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.979520  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.979538  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.979954  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.980144  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:13.980446  148525 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:21:13.980745  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.980784  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.996046  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I1010 19:21:13.996534  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.997069  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.997097  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.997530  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.997788  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:14.033593  148525 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 19:21:14.035367  148525 start.go:297] selected driver: kvm2
	I1010 19:21:14.035394  148525 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.035526  148525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:21:14.036341  148525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.036452  148525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:21:14.052462  148525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:21:14.052918  148525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:21:14.052967  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:21:14.053019  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:21:14.053067  148525 start.go:340] cluster config:
	{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.053178  148525 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.055485  148525 out.go:177] * Starting "default-k8s-diff-port-361847" primary control-plane node in "default-k8s-diff-port-361847" cluster
	I1010 19:21:16.773106  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:14.056945  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:21:14.057002  148525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 19:21:14.057014  148525 cache.go:56] Caching tarball of preloaded images
	I1010 19:21:14.057118  148525 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:21:14.057134  148525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 19:21:14.057268  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:21:14.057476  148525 start.go:360] acquireMachinesLock for default-k8s-diff-port-361847: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:21:22.853158  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:25.925174  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:32.005160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:35.077198  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:41.157130  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:44.229127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:50.309136  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:53.381191  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:59.461129  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:02.533201  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:08.613124  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:11.685169  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:17.765161  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:20.837208  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:26.917127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:29.989172  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:36.069147  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:39.141173  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:45.221160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:48.293141  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:51.297376  147758 start.go:364] duration metric: took 3m49.312490934s to acquireMachinesLock for "embed-certs-541370"
	I1010 19:22:51.297453  147758 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:22:51.297464  147758 fix.go:54] fixHost starting: 
	I1010 19:22:51.297787  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:22:51.297848  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:22:51.314087  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1010 19:22:51.314588  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:22:51.315115  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:22:51.315138  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:22:51.315509  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:22:51.315691  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:22:51.315879  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:22:51.317597  147758 fix.go:112] recreateIfNeeded on embed-certs-541370: state=Stopped err=<nil>
	I1010 19:22:51.317621  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	W1010 19:22:51.317781  147758 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:22:51.319664  147758 out.go:177] * Restarting existing kvm2 VM for "embed-certs-541370" ...
	I1010 19:22:51.320967  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Start
	I1010 19:22:51.321134  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring networks are active...
	I1010 19:22:51.322026  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network default is active
	I1010 19:22:51.322468  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network mk-embed-certs-541370 is active
	I1010 19:22:51.322874  147758 main.go:141] libmachine: (embed-certs-541370) Getting domain xml...
	I1010 19:22:51.323687  147758 main.go:141] libmachine: (embed-certs-541370) Creating domain...
	I1010 19:22:51.294881  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:22:51.294927  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295226  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:22:51.295256  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295454  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:22:51.297198  147213 machine.go:96] duration metric: took 4m37.414594306s to provisionDockerMachine
	I1010 19:22:51.297252  147213 fix.go:56] duration metric: took 4m37.436635356s for fixHost
	I1010 19:22:51.297259  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 4m37.436668423s
	W1010 19:22:51.297278  147213 start.go:714] error starting host: provision: host is not running
	W1010 19:22:51.297382  147213 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1010 19:22:51.297396  147213 start.go:729] Will try again in 5 seconds ...
	I1010 19:22:52.568699  147758 main.go:141] libmachine: (embed-certs-541370) Waiting to get IP...
	I1010 19:22:52.569582  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.569952  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.570018  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.569935  148914 retry.go:31] will retry after 261.244287ms: waiting for machine to come up
	I1010 19:22:52.832639  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.833280  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.833310  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.833200  148914 retry.go:31] will retry after 304.116732ms: waiting for machine to come up
	I1010 19:22:53.138770  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.139091  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.139124  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.139055  148914 retry.go:31] will retry after 484.354474ms: waiting for machine to come up
	I1010 19:22:53.624831  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.625293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.625323  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.625234  148914 retry.go:31] will retry after 591.916836ms: waiting for machine to come up
	I1010 19:22:54.219214  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.219732  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.219763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.219673  148914 retry.go:31] will retry after 614.162479ms: waiting for machine to come up
	I1010 19:22:54.835573  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.836038  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.836063  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.835988  148914 retry.go:31] will retry after 824.170953ms: waiting for machine to come up
	I1010 19:22:55.662092  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:55.662646  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:55.662668  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:55.662586  148914 retry.go:31] will retry after 928.483848ms: waiting for machine to come up
	I1010 19:22:56.593200  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:56.593724  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:56.593756  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:56.593679  148914 retry.go:31] will retry after 941.138644ms: waiting for machine to come up
	I1010 19:22:56.299351  147213 start.go:360] acquireMachinesLock for no-preload-320324: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:22:57.536977  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:57.537403  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:57.537429  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:57.537331  148914 retry.go:31] will retry after 1.262203584s: waiting for machine to come up
	I1010 19:22:58.801921  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:58.802420  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:58.802454  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:58.802381  148914 retry.go:31] will retry after 2.154751391s: waiting for machine to come up
	I1010 19:23:00.960100  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:00.960661  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:00.960684  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:00.960607  148914 retry.go:31] will retry after 1.945155171s: waiting for machine to come up
	I1010 19:23:02.907705  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:02.908097  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:02.908129  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:02.908038  148914 retry.go:31] will retry after 3.245262469s: waiting for machine to come up
	I1010 19:23:06.157527  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:06.157897  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:06.157925  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:06.157858  148914 retry.go:31] will retry after 3.973579024s: waiting for machine to come up
	I1010 19:23:11.321975  148123 start.go:364] duration metric: took 3m17.648521443s to acquireMachinesLock for "old-k8s-version-947203"
	I1010 19:23:11.322043  148123 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:11.322056  148123 fix.go:54] fixHost starting: 
	I1010 19:23:11.322537  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:11.322596  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:11.339586  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1010 19:23:11.340091  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:11.340686  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:23:11.340719  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:11.341101  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:11.341295  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:11.341453  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetState
	I1010 19:23:11.343215  148123 fix.go:112] recreateIfNeeded on old-k8s-version-947203: state=Stopped err=<nil>
	I1010 19:23:11.343247  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	W1010 19:23:11.343408  148123 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:11.345653  148123 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-947203" ...
	I1010 19:23:10.135369  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has current primary IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135830  147758 main.go:141] libmachine: (embed-certs-541370) Found IP for machine: 192.168.39.120
	I1010 19:23:10.135839  147758 main.go:141] libmachine: (embed-certs-541370) Reserving static IP address...
	I1010 19:23:10.136283  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.136311  147758 main.go:141] libmachine: (embed-certs-541370) Reserved static IP address: 192.168.39.120
	I1010 19:23:10.136327  147758 main.go:141] libmachine: (embed-certs-541370) DBG | skip adding static IP to network mk-embed-certs-541370 - found existing host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"}
	I1010 19:23:10.136339  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Getting to WaitForSSH function...
	I1010 19:23:10.136351  147758 main.go:141] libmachine: (embed-certs-541370) Waiting for SSH to be available...
	I1010 19:23:10.138861  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139259  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.139293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139438  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH client type: external
	I1010 19:23:10.139472  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa (-rw-------)
	I1010 19:23:10.139517  147758 main.go:141] libmachine: (embed-certs-541370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:10.139541  147758 main.go:141] libmachine: (embed-certs-541370) DBG | About to run SSH command:
	I1010 19:23:10.139562  147758 main.go:141] libmachine: (embed-certs-541370) DBG | exit 0
	I1010 19:23:10.261078  147758 main.go:141] libmachine: (embed-certs-541370) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:10.261533  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetConfigRaw
	I1010 19:23:10.262192  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.265071  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265467  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.265515  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265737  147758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/config.json ...
	I1010 19:23:10.265941  147758 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:10.265960  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:10.266188  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.269186  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269618  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.269649  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269799  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.269984  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270206  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270345  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.270550  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.270834  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.270849  147758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:10.373285  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:10.373316  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373625  147758 buildroot.go:166] provisioning hostname "embed-certs-541370"
	I1010 19:23:10.373660  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373835  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.376552  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.376951  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.376994  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.377132  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.377332  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377489  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377606  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.377745  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.377918  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.377930  147758 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-541370 && echo "embed-certs-541370" | sudo tee /etc/hostname
	I1010 19:23:10.495847  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-541370
	
	I1010 19:23:10.495880  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.498868  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499205  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.499247  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499362  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.499556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499700  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499829  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.499961  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.500187  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.500210  147758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-541370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-541370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-541370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:10.614318  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:10.614357  147758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:10.614412  147758 buildroot.go:174] setting up certificates
	I1010 19:23:10.614429  147758 provision.go:84] configureAuth start
	I1010 19:23:10.614457  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.614763  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.617457  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.617888  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.617916  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.618078  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.620243  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620635  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.620666  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620789  147758 provision.go:143] copyHostCerts
	I1010 19:23:10.620895  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:10.620913  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:10.620998  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:10.621111  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:10.621123  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:10.621159  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:10.621245  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:10.621257  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:10.621292  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:10.621364  147758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.embed-certs-541370 san=[127.0.0.1 192.168.39.120 embed-certs-541370 localhost minikube]
	I1010 19:23:10.697456  147758 provision.go:177] copyRemoteCerts
	I1010 19:23:10.697515  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:10.697547  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.700439  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.700799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700956  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.701162  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.701320  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.701465  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:10.783442  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:10.808446  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 19:23:10.832117  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:23:10.856286  147758 provision.go:87] duration metric: took 241.840139ms to configureAuth
	I1010 19:23:10.856318  147758 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:10.856528  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:10.856640  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.859252  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859677  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.859708  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859916  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.860087  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860222  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.860524  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.860688  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.860702  147758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:11.086349  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:11.086375  147758 machine.go:96] duration metric: took 820.421344ms to provisionDockerMachine
	I1010 19:23:11.086386  147758 start.go:293] postStartSetup for "embed-certs-541370" (driver="kvm2")
	I1010 19:23:11.086401  147758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:11.086423  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.086755  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:11.086783  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.089482  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.089838  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.089860  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.090042  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.090253  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.090410  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.090535  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.172474  147758 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:11.176699  147758 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:11.176733  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:11.176800  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:11.176899  147758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:11.177044  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:11.186985  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:11.211385  147758 start.go:296] duration metric: took 124.982089ms for postStartSetup
	I1010 19:23:11.211442  147758 fix.go:56] duration metric: took 19.913977793s for fixHost
	I1010 19:23:11.211472  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.214421  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214780  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.214812  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214999  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.215219  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215429  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215612  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.215786  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:11.215974  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:11.215985  147758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:11.321786  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588191.295446348
	
	I1010 19:23:11.321814  147758 fix.go:216] guest clock: 1728588191.295446348
	I1010 19:23:11.321822  147758 fix.go:229] Guest: 2024-10-10 19:23:11.295446348 +0000 UTC Remote: 2024-10-10 19:23:11.211447413 +0000 UTC m=+249.373680838 (delta=83.998935ms)
	I1010 19:23:11.321870  147758 fix.go:200] guest clock delta is within tolerance: 83.998935ms
	I1010 19:23:11.321877  147758 start.go:83] releasing machines lock for "embed-certs-541370", held for 20.024455781s
	I1010 19:23:11.321905  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.322169  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:11.325004  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325350  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.325375  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325566  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326090  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326294  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326383  147758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:11.326444  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.326501  147758 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:11.326529  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.329311  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329657  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.329690  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329713  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329866  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330057  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330160  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.330188  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.330204  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330346  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.330538  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330687  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330821  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.406525  147758 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:11.428958  147758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:11.577663  147758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:11.584024  147758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:11.584112  147758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:11.603163  147758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:11.603190  147758 start.go:495] detecting cgroup driver to use...
	I1010 19:23:11.603291  147758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:11.624744  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:11.645477  147758 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:11.645537  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:11.660216  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:11.675019  147758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:11.796038  147758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:11.967750  147758 docker.go:233] disabling docker service ...
	I1010 19:23:11.967828  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:11.983184  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:12.001603  147758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:12.149408  147758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:12.306724  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:12.324302  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:12.345426  147758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:12.345508  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.357812  147758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:12.357883  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.370095  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.382389  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.395000  147758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:12.408429  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.426851  147758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.450568  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.463434  147758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:12.474537  147758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:12.474606  147758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:12.489074  147758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:12.500048  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:12.635695  147758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:12.733511  147758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:12.733593  147758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:12.739072  147758 start.go:563] Will wait 60s for crictl version
	I1010 19:23:12.739138  147758 ssh_runner.go:195] Run: which crictl
	I1010 19:23:12.743675  147758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:12.792272  147758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:12.792379  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.829968  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.862579  147758 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:11.347158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .Start
	I1010 19:23:11.347359  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring networks are active...
	I1010 19:23:11.348252  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network default is active
	I1010 19:23:11.348689  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network mk-old-k8s-version-947203 is active
	I1010 19:23:11.349139  148123 main.go:141] libmachine: (old-k8s-version-947203) Getting domain xml...
	I1010 19:23:11.349799  148123 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:23:12.671472  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting to get IP...
	I1010 19:23:12.672628  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.673145  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.673278  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.673132  149057 retry.go:31] will retry after 280.111667ms: waiting for machine to come up
	I1010 19:23:12.954865  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.955325  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.955377  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.955300  149057 retry.go:31] will retry after 289.967238ms: waiting for machine to come up
	I1010 19:23:13.247039  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.247663  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.247769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.247632  149057 retry.go:31] will retry after 319.085935ms: waiting for machine to come up
	I1010 19:23:12.863797  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:12.867335  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.867760  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:12.867794  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.868029  147758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:12.872503  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:12.887684  147758 kubeadm.go:883] updating cluster {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:12.887809  147758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:12.887853  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:12.924155  147758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:12.924240  147758 ssh_runner.go:195] Run: which lz4
	I1010 19:23:12.928613  147758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:12.933024  147758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:12.933069  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:14.450790  147758 crio.go:462] duration metric: took 1.522223644s to copy over tarball
	I1010 19:23:14.450893  147758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:16.642155  147758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191220673s)
	I1010 19:23:16.642193  147758 crio.go:469] duration metric: took 2.191371146s to extract the tarball
	I1010 19:23:16.642202  147758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:16.679611  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:16.723840  147758 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:16.723865  147758 cache_images.go:84] Images are preloaded, skipping loading
	I1010 19:23:16.723874  147758 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.1 crio true true} ...
	I1010 19:23:16.723998  147758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-541370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:16.724081  147758 ssh_runner.go:195] Run: crio config
	I1010 19:23:16.779659  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:16.779682  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:16.779693  147758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:16.779714  147758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-541370 NodeName:embed-certs-541370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:16.779842  147758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-541370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
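The KubeletConfiguration in the generated config above intentionally disables disk-pressure handling (the evictionHard thresholds are all "0%" and imageGCHighThresholdPercent is 100) and tolerates swap via failSwapOn: false. A minimal sketch, not minikube's own code, that decodes such a block and prints those fields, assuming the KubeletConfiguration document alone is saved locally as kubelet.yaml (an illustrative path):

// Illustrative only: decode a KubeletConfiguration document and print the
// eviction-related settings shown in the log above. Uses gopkg.in/yaml.v3.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	Kind                        string            `yaml:"kind"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
}

func main() {
	data, err := os.ReadFile("kubelet.yaml") // illustrative path, not from the log
	if err != nil {
		panic(err)
	}
	var cfg kubeletConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("kind=%s failSwapOn=%v imageGCHighThresholdPercent=%d evictionHard=%v\n",
		cfg.Kind, cfg.FailSwapOn, cfg.ImageGCHighThresholdPercent, cfg.EvictionHard)
}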
	
	I1010 19:23:16.779904  147758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:16.791424  147758 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:16.791493  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:16.801715  147758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1010 19:23:16.821364  147758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:16.842703  147758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1010 19:23:16.864835  147758 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:16.868928  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:13.568192  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.568690  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.568721  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.568657  149057 retry.go:31] will retry after 575.032546ms: waiting for machine to come up
	I1010 19:23:14.145650  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.146293  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.146319  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.146234  149057 retry.go:31] will retry after 702.803794ms: waiting for machine to come up
	I1010 19:23:14.851201  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.851736  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.851769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.851706  149057 retry.go:31] will retry after 883.195401ms: waiting for machine to come up
	I1010 19:23:15.736627  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:15.737199  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:15.737252  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:15.737155  149057 retry.go:31] will retry after 794.699815ms: waiting for machine to come up
	I1010 19:23:16.533952  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:16.534510  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:16.534545  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:16.534443  149057 retry.go:31] will retry after 961.751912ms: waiting for machine to come up
	I1010 19:23:17.497535  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:17.498007  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:17.498035  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:17.497936  149057 retry.go:31] will retry after 1.423503471s: waiting for machine to come up
	I1010 19:23:16.883162  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:17.027646  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:17.045083  147758 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370 for IP: 192.168.39.120
	I1010 19:23:17.045108  147758 certs.go:194] generating shared ca certs ...
	I1010 19:23:17.045130  147758 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:17.045491  147758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:17.045561  147758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:17.045579  147758 certs.go:256] generating profile certs ...
	I1010 19:23:17.045730  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/client.key
	I1010 19:23:17.045814  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key.dd7630a8
	I1010 19:23:17.045874  147758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key
	I1010 19:23:17.046015  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:17.046055  147758 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:17.046075  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:17.046114  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:17.046150  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:17.046177  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:17.046235  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:17.047131  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:17.087057  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:17.137707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:17.181707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:17.213227  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 19:23:17.247846  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:17.275989  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:17.301144  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 19:23:17.326232  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:17.350586  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:17.374666  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:17.399570  147758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:17.417846  147758 ssh_runner.go:195] Run: openssl version
	I1010 19:23:17.424206  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:17.436091  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441020  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441090  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.447318  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:17.459191  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:17.470878  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476185  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476248  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.482808  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:17.494626  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:17.506522  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511484  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511558  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.517445  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:17.529109  147758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:17.534139  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:17.540846  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:17.547429  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:17.554350  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:17.561036  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:17.567571  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
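Each openssl x509 ... -checkend 86400 run above asserts that the corresponding control-plane certificate stays valid for at least another 24 hours (86400 seconds), so the restart can reuse it rather than regenerate it. A minimal Go sketch of the same check, assuming a PEM-encoded certificate at an illustrative path; this is not minikube's implementation:

// Illustrative only: mirror `openssl x509 -noout -in <cert> -checkend 86400`
// by parsing a PEM certificate and comparing NotAfter against now + 24h.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt") // illustrative path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid for at least 24h")
	}
}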
	I1010 19:23:17.574019  147758 kubeadm.go:392] StartCluster: {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:17.574128  147758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:17.574187  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.612699  147758 cri.go:89] found id: ""
	I1010 19:23:17.612804  147758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:17.623827  147758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:17.623856  147758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:17.623917  147758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:17.634732  147758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:17.635754  147758 kubeconfig.go:125] found "embed-certs-541370" server: "https://192.168.39.120:8443"
	I1010 19:23:17.637813  147758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:17.648543  147758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I1010 19:23:17.648590  147758 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:17.648606  147758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:17.648671  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.693966  147758 cri.go:89] found id: ""
	I1010 19:23:17.694057  147758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:17.715977  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:17.727871  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:17.727891  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:17.727942  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:17.738274  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:17.738340  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:17.748925  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:17.758945  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:17.759008  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:17.769169  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.779196  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:17.779282  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.790948  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:17.802264  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:17.802332  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:17.814009  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:17.826820  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:17.947270  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.128720  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.181409785s)
	I1010 19:23:19.128770  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.343735  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.419728  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.529802  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:19.529930  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.030019  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.530833  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.558314  147758 api_server.go:72] duration metric: took 1.028510044s to wait for apiserver process to appear ...
	I1010 19:23:20.558350  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:23:20.558375  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:20.558991  147758 api_server.go:269] stopped: https://192.168.39.120:8443/healthz: Get "https://192.168.39.120:8443/healthz": dial tcp 192.168.39.120:8443: connect: connection refused
	I1010 19:23:21.058727  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:18.923057  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:18.923636  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:18.923671  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:18.923582  149057 retry.go:31] will retry after 2.09836426s: waiting for machine to come up
	I1010 19:23:21.023500  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:21.024054  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:21.024084  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:21.023999  149057 retry.go:31] will retry after 2.809962093s: waiting for machine to come up
	I1010 19:23:23.187135  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:23:23.187187  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:23:23.187203  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.233367  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.233414  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:23.558658  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.575108  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.575139  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.058679  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.065735  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:24.065763  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.559440  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.565460  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:23:24.571828  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:23:24.571859  147758 api_server.go:131] duration metric: took 4.013501806s to wait for apiserver health ...
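The wait above polls https://192.168.39.120:8443/healthz roughly every half second: the initial connection-refused error, the 403 from the anonymous probe, and the 500s returned while post-start hooks such as rbac/bootstrap-roles are still failing are all retried until the endpoint finally answers 200. A minimal sketch of that polling pattern, not minikube's api_server.go; TLS verification is skipped here purely for brevity, whereas a real client would trust the cluster CA:

// Illustrative only: poll an apiserver /healthz endpoint until it reports 200
// or a deadline expires, retrying on errors and non-200 responses.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // brevity only
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.120:8443/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", code, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}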
	I1010 19:23:24.571869  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:24.571875  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:24.573875  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:23:24.575458  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:23:24.586870  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:23:24.624362  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:23:24.643465  147758 system_pods.go:59] 8 kube-system pods found
	I1010 19:23:24.643516  147758 system_pods.go:61] "coredns-7c65d6cfc9-fgtkg" [df696e79-ca6f-4d73-a57e-9c6cdc93c505] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:23:24.643532  147758 system_pods.go:61] "etcd-embed-certs-541370" [254fa12c-b0d2-499f-8dd9-c1505efeaaab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:23:24.643543  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [fcd3809d-d325-4481-8e86-c246e29458fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:23:24.643565  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ab0fdd6b-d9b7-48dc-b82f-29b21d2295ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:23:24.643584  147758 system_pods.go:61] "kube-proxy-f5l6x" [446383fa-44c5-4b9e-bfc5-e38799597e75] Running
	I1010 19:23:24.643592  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [1c6af7e7-ce16-4ae2-8feb-e5d474173de1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:23:24.643603  147758 system_pods.go:61] "metrics-server-6867b74b74-kw529" [aad00321-d499-4563-849e-286d6e699fc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:23:24.643611  147758 system_pods.go:61] "storage-provisioner" [df4ae621-5066-4276-9276-a0538a9f9dd1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:23:24.643620  147758 system_pods.go:74] duration metric: took 19.234558ms to wait for pod list to return data ...
	I1010 19:23:24.643637  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:23:24.651647  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:23:24.651683  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:23:24.651699  147758 node_conditions.go:105] duration metric: took 8.056629ms to run NodePressure ...
	I1010 19:23:24.651720  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:24.915651  147758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921104  147758 kubeadm.go:739] kubelet initialised
	I1010 19:23:24.921131  147758 kubeadm.go:740] duration metric: took 5.44643ms waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921142  147758 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:23:24.927535  147758 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:23.837827  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:23.838271  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:23.838295  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:23.838220  149057 retry.go:31] will retry after 2.562944835s: waiting for machine to come up
	I1010 19:23:26.402699  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:26.403250  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:26.403294  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:26.403192  149057 retry.go:31] will retry after 3.867656846s: waiting for machine to come up
	I1010 19:23:26.932764  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:28.936055  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.434959  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.893914  148525 start.go:364] duration metric: took 2m17.836396131s to acquireMachinesLock for "default-k8s-diff-port-361847"
	I1010 19:23:31.893993  148525 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:31.894007  148525 fix.go:54] fixHost starting: 
	I1010 19:23:31.894438  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:31.894502  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:31.914583  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I1010 19:23:31.915054  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:31.915535  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:23:31.915560  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:31.915967  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:31.916207  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:31.916387  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:23:31.918035  148525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-361847: state=Stopped err=<nil>
	I1010 19:23:31.918073  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	W1010 19:23:31.918241  148525 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:31.920390  148525 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-361847" ...
	I1010 19:23:30.275222  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.275762  148123 main.go:141] libmachine: (old-k8s-version-947203) Found IP for machine: 192.168.61.112
	I1010 19:23:30.275779  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserving static IP address...
	I1010 19:23:30.275794  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has current primary IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.276478  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.276529  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserved static IP address: 192.168.61.112
	I1010 19:23:30.276553  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | skip adding static IP to network mk-old-k8s-version-947203 - found existing host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"}
	I1010 19:23:30.276574  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:23:30.276590  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting for SSH to be available...
	I1010 19:23:30.278486  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278854  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.278890  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278984  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:23:30.279016  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:23:30.279051  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:30.279068  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:23:30.279080  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:23:30.405053  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:30.405492  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:23:30.406224  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.408830  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409178  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.409204  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409462  148123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:23:30.409673  148123 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:30.409693  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:30.409916  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.412131  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412400  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.412427  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.412770  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.412965  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.413117  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.413368  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.413653  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.413668  148123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:30.525149  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:30.525176  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525417  148123 buildroot.go:166] provisioning hostname "old-k8s-version-947203"
	I1010 19:23:30.525478  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525703  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.528343  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528681  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.528708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528841  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.529035  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529185  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529303  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.529460  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.529635  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.529645  148123 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947203 && echo "old-k8s-version-947203" | sudo tee /etc/hostname
	I1010 19:23:30.655605  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947203
	
	I1010 19:23:30.655644  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.658439  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.658834  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.658869  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.659063  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.659285  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659458  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.659733  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.659905  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.659921  148123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:30.781974  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:30.782011  148123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:30.782033  148123 buildroot.go:174] setting up certificates
	I1010 19:23:30.782048  148123 provision.go:84] configureAuth start
	I1010 19:23:30.782059  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.782379  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.785189  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785584  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.785613  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785782  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.788086  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788432  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.788458  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788582  148123 provision.go:143] copyHostCerts
	I1010 19:23:30.788645  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:30.788660  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:30.788729  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:30.788839  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:30.788867  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:30.788900  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:30.788977  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:30.788986  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:30.789013  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:30.789081  148123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947203 san=[127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]
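	The server cert generated above carries the SAN list [127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]. As a rough sketch of how that SAN list maps onto Go's crypto/x509 fields (self-signed here for brevity; the real provisioning step signs with the ca.pem/ca-key.pem paths listed in the log), not minikube's actual code:

// Sketch only: issue a self-signed server certificate with the SANs from the
// provisioning log. minikube's real flow signs this with its CA key instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-947203"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the "generating server cert" log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.112")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-947203"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}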
	I1010 19:23:31.240626  148123 provision.go:177] copyRemoteCerts
	I1010 19:23:31.240698  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:31.240737  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.243678  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244108  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.244138  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244356  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.244565  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.244765  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.244929  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.331337  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:31.356266  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1010 19:23:31.381186  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:31.405301  148123 provision.go:87] duration metric: took 623.235619ms to configureAuth
	I1010 19:23:31.405337  148123 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:31.405541  148123 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:23:31.405620  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.408111  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408444  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.408470  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408695  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.408947  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409109  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.409374  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.409583  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.409609  148123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:31.647023  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:31.647050  148123 machine.go:96] duration metric: took 1.237364012s to provisionDockerMachine
	I1010 19:23:31.647063  148123 start.go:293] postStartSetup for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:23:31.647073  148123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:31.647104  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.647464  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:31.647502  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.649991  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650288  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.650311  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650484  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.650637  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.650832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.650968  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.735985  148123 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:31.740689  148123 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:31.740725  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:31.740803  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:31.740947  148123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:31.741057  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:31.751383  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:31.777091  148123 start.go:296] duration metric: took 130.011618ms for postStartSetup
	I1010 19:23:31.777134  148123 fix.go:56] duration metric: took 20.455078936s for fixHost
	I1010 19:23:31.777158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.780083  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780661  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.780708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.781069  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781245  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781382  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.781534  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.781711  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.781721  148123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:31.893727  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588211.849777581
	
	I1010 19:23:31.893756  148123 fix.go:216] guest clock: 1728588211.849777581
	I1010 19:23:31.893763  148123 fix.go:229] Guest: 2024-10-10 19:23:31.849777581 +0000 UTC Remote: 2024-10-10 19:23:31.777138808 +0000 UTC m=+218.253674512 (delta=72.638773ms)
	I1010 19:23:31.893806  148123 fix.go:200] guest clock delta is within tolerance: 72.638773ms
	I1010 19:23:31.893813  148123 start.go:83] releasing machines lock for "old-k8s-version-947203", held for 20.571797307s
	I1010 19:23:31.893848  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.894151  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:31.896747  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897156  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.897207  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897385  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898036  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898375  148123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:31.898422  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.898461  148123 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:31.898487  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.901274  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901450  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901608  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901649  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901784  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.901920  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901945  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901952  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902112  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.902130  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902306  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902348  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.902412  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902533  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:32.016264  148123 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:32.022618  148123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:32.169502  148123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:32.175960  148123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:32.176039  148123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:32.193584  148123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:32.193615  148123 start.go:495] detecting cgroup driver to use...
	I1010 19:23:32.193698  148123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:32.210923  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:32.227172  148123 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:32.227254  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:32.242142  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:32.256896  148123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:32.387462  148123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:32.575184  148123 docker.go:233] disabling docker service ...
	I1010 19:23:32.575257  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:32.590667  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:32.604825  148123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:32.739827  148123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:32.863648  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:32.879435  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:32.899582  148123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:23:32.899659  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.915010  148123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:32.915081  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.931181  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.944038  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.955182  148123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:32.972220  148123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:32.982681  148123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:32.982739  148123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:32.997012  148123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
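	The three commands above form a small fallback chain: probe the bridge netfilter sysctl, load br_netfilter when the /proc entry is missing, then enable IPv4 forwarding. A minimal local sketch of that chain (plain os/exec here, whereas the log runs everything through minikube's SSH runner):

// Illustrative fallback: if the bridge netfilter sysctl is absent, load the
// br_netfilter module, then enable IPv4 forwarding either way.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Printf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return err
}

func main() {
	// On a fresh guest /proc/sys/net/bridge may not exist yet, so this probe can fail.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// Loading br_netfilter creates the /proc/sys/net/bridge entries.
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	// Kubernetes networking needs IPv4 forwarding regardless of the probe result.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}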
	I1010 19:23:33.009468  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:33.180403  148123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:33.284934  148123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:33.285008  148123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:33.290063  148123 start.go:563] Will wait 60s for crictl version
	I1010 19:23:33.290124  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:33.294752  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:33.342710  148123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:33.342792  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.372821  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.409395  148123 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:23:33.411012  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:33.414374  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.414789  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:33.414836  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.415084  148123 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:33.420164  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:33.433442  148123 kubeadm.go:883] updating cluster {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:33.433631  148123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:23:33.433717  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:33.480013  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:33.480078  148123 ssh_runner.go:195] Run: which lz4
	I1010 19:23:33.485443  148123 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:33.490986  148123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:33.491032  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:23:31.921836  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Start
	I1010 19:23:31.922036  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring networks are active...
	I1010 19:23:31.922890  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network default is active
	I1010 19:23:31.923271  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network mk-default-k8s-diff-port-361847 is active
	I1010 19:23:31.923685  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Getting domain xml...
	I1010 19:23:31.924449  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Creating domain...
	I1010 19:23:33.241164  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting to get IP...
	I1010 19:23:33.242273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242713  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242814  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.242702  149213 retry.go:31] will retry after 195.013046ms: waiting for machine to come up
	I1010 19:23:33.438965  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439452  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.439379  149213 retry.go:31] will retry after 344.223823ms: waiting for machine to come up
	I1010 19:23:33.785167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785833  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785864  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.785780  149213 retry.go:31] will retry after 342.787658ms: waiting for machine to come up
	I1010 19:23:33.435066  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:34.936768  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:34.936800  147758 pod_ready.go:82] duration metric: took 10.009235225s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:34.936814  147758 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944395  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.944430  147758 pod_ready.go:82] duration metric: took 1.007599746s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944445  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953224  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.953255  147758 pod_ready.go:82] duration metric: took 8.801702ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953266  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.227279  148123 crio.go:462] duration metric: took 1.74186828s to copy over tarball
	I1010 19:23:35.227383  148123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:38.216499  148123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.989082447s)
	I1010 19:23:38.216533  148123 crio.go:469] duration metric: took 2.989218587s to extract the tarball
	I1010 19:23:38.216541  148123 ssh_runner.go:146] rm: /preloaded.tar.lz4
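	Taken together, the preload handling above reduces to: stat /preloaded.tar.lz4 on the guest, copy the cached tarball over when it is missing, extract it into /var with xattrs preserved, then delete it. A compact sketch of that sequence, with the copy step left as a placeholder since the actual transfer is an scp over minikube's SSH runner:

// Sketch of the preload sequence seen in the log; paths and the tar flags are
// taken from the log lines, the copy itself is elided.
package main

import (
	"log"
	"os/exec"
)

func sh(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		log.Printf("%v: %v\n%s", args, err, out)
	}
	return err
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	if err := sh("stat", "-c", "%s %y", tarball); err != nil {
		// In the log this is where the cached preloaded-images tarball is scp'd over.
		log.Printf("preload tarball missing; would copy it to the guest here")
	}
	// Same extraction the log runs: keep security xattrs, decompress with lz4 into /var.
	_ = sh("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	_ = sh("sudo", "rm", "-f", tarball)
}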
	I1010 19:23:38.259699  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:38.308284  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:38.308313  148123 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:23:38.308422  148123 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:23:38.308477  148123 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.308482  148123 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.308593  148123 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.308617  148123 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.308405  148123 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.308446  148123 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.308530  148123 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310558  148123 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.310612  148123 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.310642  148123 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.310652  148123 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310645  148123 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.310560  148123 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.310733  148123 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.311068  148123 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:23:38.469174  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.469563  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.488463  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.490815  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.491841  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.496079  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.501292  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:23:34.130443  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130998  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.130915  149213 retry.go:31] will retry after 393.100812ms: waiting for machine to come up
	I1010 19:23:34.525570  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526032  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526060  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.525980  149213 retry.go:31] will retry after 465.468437ms: waiting for machine to come up
	I1010 19:23:34.992775  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993348  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993386  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.993287  149213 retry.go:31] will retry after 907.884473ms: waiting for machine to come up
	I1010 19:23:35.902481  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902942  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:35.902878  149213 retry.go:31] will retry after 1.157806188s: waiting for machine to come up
	I1010 19:23:37.062068  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062777  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:37.062706  149213 retry.go:31] will retry after 1.432559208s: waiting for machine to come up
	I1010 19:23:38.496653  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497153  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:38.497066  149213 retry.go:31] will retry after 1.559787003s: waiting for machine to come up
	I1010 19:23:37.961068  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.065559  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.528757  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.528786  147758 pod_ready.go:82] duration metric: took 4.575513259s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.528802  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538002  147758 pod_ready.go:93] pod "kube-proxy-f5l6x" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.538034  147758 pod_ready.go:82] duration metric: took 9.22357ms for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538049  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543594  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.543615  147758 pod_ready.go:82] duration metric: took 5.558665ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543626  147758 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:38.581315  148123 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:23:38.581361  148123 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:23:38.581385  148123 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.581407  148123 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.581442  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.581457  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.642668  148123 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:23:38.642721  148123 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.642777  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658235  148123 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:23:38.658291  148123 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.658288  148123 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:23:38.658328  148123 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.658346  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658373  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.678038  148123 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:23:38.678097  148123 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.678155  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682719  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.682773  148123 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:23:38.682721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.682791  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.682812  148123 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:23:38.682845  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.682872  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.691181  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842626  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:38.842714  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.842754  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.842797  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.842721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.842851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842789  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.989994  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.005815  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:39.005923  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:39.010029  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:39.010105  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:39.010150  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:39.010230  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:39.134646  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:39.141431  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.176750  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:23:39.176945  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:23:39.176989  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:23:39.177038  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:23:39.177073  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:23:39.177104  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:23:39.335056  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:23:39.335113  148123 cache_images.go:92] duration metric: took 1.026784312s to LoadCachedImages
	W1010 19:23:39.335180  148123 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
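	The cache_images flow above decides an image "needs transfer" by asking podman for the image ID stored in the runtime and comparing it with the ID expected for that tag; when they differ (or the image is absent) it removes the tag and falls back to the local cache directory, which in this run does not contain the files. A rough sketch of just the check, reusing the podman invocation and one expected hash from the log:

// Hypothetical sketch of the "needs transfer" check; the expected ID for
// pause:3.2 is copied from the log, the rest of the flow is omitted.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks podman (via sudo, as in the log) for the stored ID of an image reference.
func imageID(ref string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", ref).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	expected := map[string]string{
		"registry.k8s.io/pause:3.2": "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
	}
	for ref, want := range expected {
		got, err := imageID(ref)
		if err != nil || got != want {
			fmt.Printf("%q needs transfer (runtime has %q)\n", ref, got)
		}
	}
}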
	I1010 19:23:39.335192  148123 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.20.0 crio true true} ...
	I1010 19:23:39.335305  148123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-947203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:39.335380  148123 ssh_runner.go:195] Run: crio config
	I1010 19:23:39.394307  148123 cni.go:84] Creating CNI manager for ""
	I1010 19:23:39.394338  148123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:39.394350  148123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:39.394378  148123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947203 NodeName:old-k8s-version-947203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:23:39.394572  148123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-947203"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:39.394662  148123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:23:39.405510  148123 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:39.405600  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:39.415968  148123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1010 19:23:39.443234  148123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:39.462448  148123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:23:39.481959  148123 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:39.486188  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:39.501922  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:39.642769  148123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:39.661335  148123 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203 for IP: 192.168.61.112
	I1010 19:23:39.661363  148123 certs.go:194] generating shared ca certs ...
	I1010 19:23:39.661384  148123 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:39.661595  148123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:39.661658  148123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:39.661671  148123 certs.go:256] generating profile certs ...
	I1010 19:23:39.661779  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key
	I1010 19:23:39.661867  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52
	I1010 19:23:39.661922  148123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key
	I1010 19:23:39.662312  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:39.662395  148123 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:39.662410  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:39.662447  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:39.662501  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:39.662531  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:39.662622  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:39.663372  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:39.710366  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:39.752724  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:39.801408  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:39.843557  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 19:23:39.892522  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:39.938519  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:39.966140  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:39.992974  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:40.019381  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:40.045769  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:40.080683  148123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:40.099747  148123 ssh_runner.go:195] Run: openssl version
	I1010 19:23:40.107865  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:40.123158  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128594  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128660  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.135319  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:40.147514  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:40.162463  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168387  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168512  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.176304  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:40.191306  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:40.204196  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211044  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211122  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.219574  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
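	The run of openssl and ln commands above is minikube installing its CA certificates into the guest's trust store: each PEM is copied into /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), which is the lookup scheme OpenSSL uses when scanning a certificate directory. A minimal shell sketch of that convention follows; the certificate path is taken from the log, while the install_ca helper name and the fixed ".0" suffix are illustrative assumptions, not minikube code.

	# Illustrative sketch only -- install_ca is a hypothetical helper, not part of minikube.
	install_ca() {
	  local pem="$1"                                   # e.g. /usr/share/ca-certificates/minikubeCA.pem
	  local hash
	  hash="$(openssl x509 -hash -noout -in "$pem")"   # subject hash, e.g. b5213941
	  # OpenSSL resolves CAs in /etc/ssl/certs by <subject-hash>.<n>; using ".0"
	  # assumes no other installed certificate shares the same hash.
	  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
	}

	install_ca /usr/share/ca-certificates/minikubeCA.pem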
	I1010 19:23:40.232634  148123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:40.237639  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:40.244141  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:40.251188  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:40.258538  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:40.265029  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:40.271754  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:23:40.278352  148123 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:40.278453  148123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:40.278518  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.317771  148123 cri.go:89] found id: ""
	I1010 19:23:40.317838  148123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:40.330494  148123 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:40.330520  148123 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:40.330576  148123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:40.341984  148123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:40.343386  148123 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:23:40.344334  148123 kubeconfig.go:62] /home/jenkins/minikube-integration/19787-81676/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-947203" cluster setting kubeconfig missing "old-k8s-version-947203" context setting]
	I1010 19:23:40.345360  148123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:40.347128  148123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:40.357902  148123 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I1010 19:23:40.357944  148123 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:40.357961  148123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:40.358031  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.399970  148123 cri.go:89] found id: ""
	I1010 19:23:40.400057  148123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:40.418527  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:40.429182  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:40.429217  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:40.429262  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:40.439287  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:40.439363  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:40.449479  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:40.459288  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:40.459359  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:40.472733  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.484890  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:40.484958  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.495301  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:40.504750  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:40.504820  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:40.515034  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:40.526168  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:40.675022  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.566371  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.815296  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.930082  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:42.027597  148123 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:42.027713  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:42.528219  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.527735  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
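	The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above (and continuing below, interleaved with the other profiles' logs) are minikube polling for the kube-apiserver process to appear after the kubeadm init phases, retrying roughly every 500ms as the timestamps show. A small shell sketch of that kind of wait loop follows; the 500ms cadence comes from the log, while the wait_for_apiserver name and the four-minute deadline are assumptions for illustration.

	# Illustrative sketch only; the cadence matches the log, the deadline is assumed.
	wait_for_apiserver() {
	  local deadline=$(( $(date +%s) + 240 ))
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    [ "$(date +%s)" -ge "$deadline" ] && return 1   # give up after the deadline
	    sleep 0.5                                       # poll again in 500ms
	  done
	}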
	I1010 19:23:40.058247  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058783  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058835  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:40.058696  149213 retry.go:31] will retry after 2.214094081s: waiting for machine to come up
	I1010 19:23:42.274629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275194  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:42.275106  149213 retry.go:31] will retry after 2.126528577s: waiting for machine to come up
	I1010 19:23:42.550865  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:45.051043  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:44.028463  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.527916  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.027911  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.528797  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.028772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.527982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.027799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.527894  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.028352  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.527935  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.403101  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403575  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403616  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:44.403534  149213 retry.go:31] will retry after 3.603964622s: waiting for machine to come up
	I1010 19:23:48.008726  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009142  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009191  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:48.009100  149213 retry.go:31] will retry after 3.639744981s: waiting for machine to come up
	I1010 19:23:47.551003  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:49.661572  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:52.858209  147213 start.go:364] duration metric: took 56.558774237s to acquireMachinesLock for "no-preload-320324"
	I1010 19:23:52.858274  147213 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:52.858283  147213 fix.go:54] fixHost starting: 
	I1010 19:23:52.858705  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:52.858742  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:52.878428  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I1010 19:23:52.878955  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:52.879563  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:23:52.879599  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:52.879945  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:52.880144  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:23:52.880282  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:23:52.881626  147213 fix.go:112] recreateIfNeeded on no-preload-320324: state=Stopped err=<nil>
	I1010 19:23:52.881650  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	W1010 19:23:52.881799  147213 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:52.883912  147213 out.go:177] * Restarting existing kvm2 VM for "no-preload-320324" ...
	I1010 19:23:49.028421  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:49.528271  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.028462  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.527867  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.028782  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.528581  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.028732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.027852  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.528647  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.885239  147213 main.go:141] libmachine: (no-preload-320324) Calling .Start
	I1010 19:23:52.885429  147213 main.go:141] libmachine: (no-preload-320324) Ensuring networks are active...
	I1010 19:23:52.886211  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network default is active
	I1010 19:23:52.886749  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network mk-no-preload-320324 is active
	I1010 19:23:52.887310  147213 main.go:141] libmachine: (no-preload-320324) Getting domain xml...
	I1010 19:23:52.888034  147213 main.go:141] libmachine: (no-preload-320324) Creating domain...
	I1010 19:23:51.652975  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653464  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Found IP for machine: 192.168.50.32
	I1010 19:23:51.653487  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserving static IP address...
	I1010 19:23:51.653509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has current primary IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653910  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.653956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | skip adding static IP to network mk-default-k8s-diff-port-361847 - found existing host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"}
	I1010 19:23:51.653974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserved static IP address: 192.168.50.32
	I1010 19:23:51.653993  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for SSH to be available...
	I1010 19:23:51.654006  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Getting to WaitForSSH function...
	I1010 19:23:51.655927  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656210  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.656240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656334  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH client type: external
	I1010 19:23:51.656372  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa (-rw-------)
	I1010 19:23:51.656409  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:51.656425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | About to run SSH command:
	I1010 19:23:51.656436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | exit 0
	I1010 19:23:51.780839  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:51.781206  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetConfigRaw
	I1010 19:23:51.781939  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:51.784347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784663  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.784696  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784918  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:23:51.785134  148525 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:51.785158  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:51.785403  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.787817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788306  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.788347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788547  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.788807  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789038  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789274  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.789515  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.789802  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.789825  148525 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:51.893367  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:51.893399  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893652  148525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-361847"
	I1010 19:23:51.893699  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.896986  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897377  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.897422  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897662  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.897815  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.897949  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.898064  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.898302  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.898489  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.898502  148525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-361847 && echo "default-k8s-diff-port-361847" | sudo tee /etc/hostname
	I1010 19:23:52.015158  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361847
	
	I1010 19:23:52.015199  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.018094  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018468  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.018497  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018683  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.018901  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019039  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.019474  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.019690  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.019708  148525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-361847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-361847/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-361847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:52.133923  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:52.133960  148525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:52.134007  148525 buildroot.go:174] setting up certificates
	I1010 19:23:52.134023  148525 provision.go:84] configureAuth start
	I1010 19:23:52.134043  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:52.134351  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.137242  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137637  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.137670  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137860  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.140264  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.140672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140833  148525 provision.go:143] copyHostCerts
	I1010 19:23:52.140907  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:52.140922  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:52.140977  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:52.141088  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:52.141098  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:52.141118  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:52.141175  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:52.141182  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:52.141213  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:52.141264  148525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-361847 san=[127.0.0.1 192.168.50.32 default-k8s-diff-port-361847 localhost minikube]
	I1010 19:23:52.241146  148525 provision.go:177] copyRemoteCerts
	I1010 19:23:52.241212  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:52.241241  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.244061  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244463  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.244490  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244731  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.244929  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.245110  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.245228  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.327309  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:52.352288  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 19:23:52.376308  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:52.400807  148525 provision.go:87] duration metric: took 266.765119ms to configureAuth
	I1010 19:23:52.400862  148525 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:52.401065  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:52.401171  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.403552  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.403919  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.403950  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.404173  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.404371  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404513  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.404743  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.404927  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.404949  148525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:52.622902  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:52.622930  148525 machine.go:96] duration metric: took 837.779579ms to provisionDockerMachine
	I1010 19:23:52.622942  148525 start.go:293] postStartSetup for "default-k8s-diff-port-361847" (driver="kvm2")
	I1010 19:23:52.622952  148525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:52.622968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.623331  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:52.623369  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.626106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626435  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.626479  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626721  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.626932  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.627091  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.627262  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.708050  148525 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:52.712524  148525 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:52.712550  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:52.712608  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:52.712688  148525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:52.712782  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:52.723719  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:52.747686  148525 start.go:296] duration metric: took 124.729371ms for postStartSetup
	I1010 19:23:52.747727  148525 fix.go:56] duration metric: took 20.853721623s for fixHost
	I1010 19:23:52.747749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.750316  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750645  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.750677  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.751046  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751195  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751333  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.751511  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.751733  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.751749  148525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:52.857986  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588232.831281012
	
	I1010 19:23:52.858019  148525 fix.go:216] guest clock: 1728588232.831281012
	I1010 19:23:52.858029  148525 fix.go:229] Guest: 2024-10-10 19:23:52.831281012 +0000 UTC Remote: 2024-10-10 19:23:52.747731551 +0000 UTC m=+158.845659062 (delta=83.549461ms)
	I1010 19:23:52.858075  148525 fix.go:200] guest clock delta is within tolerance: 83.549461ms
	I1010 19:23:52.858088  148525 start.go:83] releasing machines lock for "default-k8s-diff-port-361847", held for 20.964121636s
	I1010 19:23:52.858120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.858491  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.861220  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.861672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861828  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862337  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862548  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862655  148525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:52.862702  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.862825  148525 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:52.862854  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.865579  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.865960  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866290  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866300  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.866319  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866423  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866496  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866648  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866671  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.866798  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866910  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.966354  148525 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:52.972526  148525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:53.119801  148525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:53.126287  148525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:53.126355  148525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:53.147301  148525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:53.147325  148525 start.go:495] detecting cgroup driver to use...
	I1010 19:23:53.147381  148525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:53.167368  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:53.183239  148525 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:53.183308  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:53.203230  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:53.217261  148525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:53.343555  148525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:53.491952  148525 docker.go:233] disabling docker service ...
	I1010 19:23:53.492054  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:53.508136  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:53.521662  148525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:53.651858  148525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:53.781954  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:53.803934  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:53.826070  148525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:53.826146  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.837506  148525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:53.837587  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.848653  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.860511  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.873254  148525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:53.887862  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.899507  148525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.923325  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.934999  148525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:53.946869  148525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:53.946945  148525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:53.968116  148525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:53.980109  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:54.106345  148525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:54.210345  148525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:54.210417  148525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:54.215968  148525 start.go:563] Will wait 60s for crictl version
	I1010 19:23:54.216037  148525 ssh_runner.go:195] Run: which crictl
	I1010 19:23:54.219885  148525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:54.260286  148525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:54.260375  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.289908  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.320940  148525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:52.050137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.060194  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:56.551981  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.027982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.528065  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.027890  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.527753  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.027877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.028263  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.028743  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.528039  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.234149  147213 main.go:141] libmachine: (no-preload-320324) Waiting to get IP...
	I1010 19:23:54.235147  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.235598  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.235657  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.235580  149378 retry.go:31] will retry after 308.921504ms: waiting for machine to come up
	I1010 19:23:54.546327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.547002  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.547029  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.546956  149378 retry.go:31] will retry after 288.92327ms: waiting for machine to come up
	I1010 19:23:54.837625  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.838136  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.838164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.838054  149378 retry.go:31] will retry after 321.948113ms: waiting for machine to come up
	I1010 19:23:55.161940  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.162494  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.162526  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.162441  149378 retry.go:31] will retry after 573.848095ms: waiting for machine to come up
	I1010 19:23:55.739080  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.739592  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.739620  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.739494  149378 retry.go:31] will retry after 529.087622ms: waiting for machine to come up
	I1010 19:23:56.270324  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.270899  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.270929  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.270850  149378 retry.go:31] will retry after 629.204989ms: waiting for machine to come up
	I1010 19:23:56.901836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.902283  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.902325  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.902222  149378 retry.go:31] will retry after 804.309499ms: waiting for machine to come up
	I1010 19:23:57.708806  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:57.709175  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:57.709208  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:57.709151  149378 retry.go:31] will retry after 1.204078295s: waiting for machine to come up
	I1010 19:23:54.322534  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:54.325744  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326217  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:54.326257  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326533  148525 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:54.331527  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:54.343881  148525 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:54.344033  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:54.344084  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:54.389066  148525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:54.389149  148525 ssh_runner.go:195] Run: which lz4
	I1010 19:23:54.393550  148525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:54.397787  148525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:54.397833  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:55.897111  148525 crio.go:462] duration metric: took 1.503593301s to copy over tarball
	I1010 19:23:55.897212  148525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:58.060691  148525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16343467s)
	I1010 19:23:58.060731  148525 crio.go:469] duration metric: took 2.163580526s to extract the tarball
	I1010 19:23:58.060741  148525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:58.103877  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:58.162881  148525 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:58.162907  148525 cache_images.go:84] Images are preloaded, skipping loading
	I1010 19:23:58.162915  148525 kubeadm.go:934] updating node { 192.168.50.32 8444 v1.31.1 crio true true} ...
	I1010 19:23:58.163031  148525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-361847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:58.163098  148525 ssh_runner.go:195] Run: crio config
	I1010 19:23:58.219804  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:23:58.219827  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:58.219837  148525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:58.219861  148525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-361847 NodeName:default-k8s-diff-port-361847 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:58.219982  148525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-361847"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:58.220042  148525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:58.231444  148525 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:58.231565  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:58.241835  148525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1010 19:23:58.259408  148525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:58.276571  148525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I1010 19:23:58.294640  148525 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:58.298503  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:58.312286  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:58.449757  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:58.467342  148525 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847 for IP: 192.168.50.32
	I1010 19:23:58.467377  148525 certs.go:194] generating shared ca certs ...
	I1010 19:23:58.467398  148525 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:58.467583  148525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:58.467642  148525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:58.467655  148525 certs.go:256] generating profile certs ...
	I1010 19:23:58.467826  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/client.key
	I1010 19:23:58.467895  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key.ae5e3f04
	I1010 19:23:58.467951  148525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key
	I1010 19:23:58.468089  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:58.468136  148525 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:58.468153  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:58.468194  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:58.468226  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:58.468260  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:58.468317  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:58.468931  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:58.529632  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:58.571900  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:58.612599  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:58.645536  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 19:23:58.675961  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:23:58.700712  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:58.725355  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:58.751138  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:58.775832  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:58.800729  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:58.825558  148525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:58.843331  148525 ssh_runner.go:195] Run: openssl version
	I1010 19:23:58.849271  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:58.861031  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865721  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865797  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.871961  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:58.884520  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:58.896744  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901507  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901571  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.907366  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:58.919784  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:58.931972  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936897  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936981  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.943007  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:59.052037  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:01.551982  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:59.028519  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:59.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.528623  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.028697  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.528030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.028062  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.527876  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.028745  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.528569  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.914409  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:58.914894  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:58.914927  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:58.914831  149378 retry.go:31] will retry after 1.631827888s: waiting for machine to come up
	I1010 19:24:00.548505  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:00.549135  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:00.549164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:00.549043  149378 retry.go:31] will retry after 2.126895157s: waiting for machine to come up
	I1010 19:24:02.678328  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:02.678907  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:02.678969  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:02.678891  149378 retry.go:31] will retry after 2.754376625s: waiting for machine to come up
	I1010 19:23:58.955104  148525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:58.959833  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:58.966528  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:58.973590  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:58.982390  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:58.990767  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:58.997162  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:23:59.003647  148525 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:59.003786  148525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:59.003865  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.048772  148525 cri.go:89] found id: ""
	I1010 19:23:59.048869  148525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:59.061267  148525 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:59.061288  148525 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:59.061338  148525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:59.072629  148525 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:59.074287  148525 kubeconfig.go:125] found "default-k8s-diff-port-361847" server: "https://192.168.50.32:8444"
	I1010 19:23:59.077880  148525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:59.090738  148525 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I1010 19:23:59.090783  148525 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:59.090799  148525 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:59.090886  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.136762  148525 cri.go:89] found id: ""
	I1010 19:23:59.136888  148525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:59.155937  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:59.166471  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:59.166493  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:59.166549  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:23:59.178247  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:59.178313  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:59.189455  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:23:59.200127  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:59.200204  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:59.210764  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.221048  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:59.221119  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.231762  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:23:59.242152  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:59.242217  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:59.252608  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:59.265219  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:59.391743  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.243288  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.453782  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.532137  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.623598  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:00.623711  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.124678  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.624626  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.667587  148525 api_server.go:72] duration metric: took 1.043987857s to wait for apiserver process to appear ...
	I1010 19:24:01.667621  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:01.667649  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:01.668298  148525 api_server.go:269] stopped: https://192.168.50.32:8444/healthz: Get "https://192.168.50.32:8444/healthz": dial tcp 192.168.50.32:8444: connect: connection refused
	I1010 19:24:02.168273  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.275654  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.275695  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.275713  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.309713  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.309770  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.668325  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.684992  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:05.685031  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.168198  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.176584  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:06.176627  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.668130  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.682049  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:24:06.692780  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:06.692811  148525 api_server.go:131] duration metric: took 5.025182717s to wait for apiserver health ...
	I1010 19:24:06.692820  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:24:06.692831  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:06.694447  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:03.558797  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:06.054012  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:04.028711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:04.528770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.028716  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.528083  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.028204  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.528430  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.027972  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.027829  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.528676  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.435450  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:05.435940  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:05.435970  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:05.435888  149378 retry.go:31] will retry after 2.981990051s: waiting for machine to come up
	I1010 19:24:08.419385  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:08.419982  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:08.420006  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:08.419905  149378 retry.go:31] will retry after 3.976204267s: waiting for machine to come up
	I1010 19:24:06.695841  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:06.711212  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:06.747753  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:06.768344  148525 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:06.768429  148525 system_pods.go:61] "coredns-7c65d6cfc9-rv8vq" [93b209ea-bb5f-40c5-aea8-8771b785f021] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:06.768446  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [65129999-984d-497c-a6e1-9c53a5374991] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:06.768452  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [5f18ba24-29cf-433e-a70d-23757278c04f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:06.768460  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [c189c785-8ac5-4003-802d-9e7c089d450e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:06.768467  148525 system_pods.go:61] "kube-proxy-v5lm8" [e78eabf9-5c65-4cba-83fd-0837cef05126] Running
	I1010 19:24:06.768476  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [4f84f0f5-e255-4534-9db3-e5cfee0b2447] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:06.768485  148525 system_pods.go:61] "metrics-server-6867b74b74-h5kjm" [a3979b79-bd21-490b-97ac-0a78efd43a99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:06.768493  148525 system_pods.go:61] "storage-provisioner" [ca8606d3-9adb-46da-886a-3081b11b52a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:24:06.768499  148525 system_pods.go:74] duration metric: took 20.716461ms to wait for pod list to return data ...
	I1010 19:24:06.768509  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:06.777935  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:06.777973  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:06.777988  148525 node_conditions.go:105] duration metric: took 9.473726ms to run NodePressure ...
	I1010 19:24:06.778019  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:07.053296  148525 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057585  148525 kubeadm.go:739] kubelet initialised
	I1010 19:24:07.057608  148525 kubeadm.go:740] duration metric: took 4.283027ms waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057618  148525 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:07.064157  148525 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.069962  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.069989  148525 pod_ready.go:82] duration metric: took 5.791958ms for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.069999  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.070022  148525 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.075615  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075644  148525 pod_ready.go:82] duration metric: took 5.608749ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.075654  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075661  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.081717  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081743  148525 pod_ready.go:82] duration metric: took 6.074977ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.081754  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081761  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.152204  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152244  148525 pod_ready.go:82] duration metric: took 70.475599ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.152258  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152266  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551283  148525 pod_ready.go:93] pod "kube-proxy-v5lm8" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:07.551311  148525 pod_ready.go:82] duration metric: took 399.036581ms for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551324  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:08.550896  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:10.551437  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:09.028538  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:09.527883  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.028693  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.528439  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.028528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.528679  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.028002  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.527904  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.028685  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.527833  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.401115  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401808  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has current primary IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401841  147213 main.go:141] libmachine: (no-preload-320324) Found IP for machine: 192.168.72.11
	I1010 19:24:12.401856  147213 main.go:141] libmachine: (no-preload-320324) Reserving static IP address...
	I1010 19:24:12.402368  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.402407  147213 main.go:141] libmachine: (no-preload-320324) DBG | skip adding static IP to network mk-no-preload-320324 - found existing host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"}
	I1010 19:24:12.402426  147213 main.go:141] libmachine: (no-preload-320324) Reserved static IP address: 192.168.72.11
	I1010 19:24:12.402443  147213 main.go:141] libmachine: (no-preload-320324) Waiting for SSH to be available...
	I1010 19:24:12.402458  147213 main.go:141] libmachine: (no-preload-320324) DBG | Getting to WaitForSSH function...
	I1010 19:24:12.404803  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405200  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.405226  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405461  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH client type: external
	I1010 19:24:12.405494  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa (-rw-------)
	I1010 19:24:12.405527  147213 main.go:141] libmachine: (no-preload-320324) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:24:12.405541  147213 main.go:141] libmachine: (no-preload-320324) DBG | About to run SSH command:
	I1010 19:24:12.405554  147213 main.go:141] libmachine: (no-preload-320324) DBG | exit 0
	I1010 19:24:12.529010  147213 main.go:141] libmachine: (no-preload-320324) DBG | SSH cmd err, output: <nil>: 
	I1010 19:24:12.529401  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetConfigRaw
	I1010 19:24:12.530257  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.533285  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533692  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.533727  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533963  147213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/config.json ...
	I1010 19:24:12.534205  147213 machine.go:93] provisionDockerMachine start ...
	I1010 19:24:12.534230  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:12.534450  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.536585  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.536976  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.537003  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.537133  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.537323  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537512  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537689  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.537925  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.538138  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.538151  147213 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:24:12.641679  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:24:12.641706  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.641964  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:24:12.642002  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.642235  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.645149  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645488  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.645521  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.645836  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646001  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646155  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.646352  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.646533  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.646545  147213 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320324 && echo "no-preload-320324" | sudo tee /etc/hostname
	I1010 19:24:12.766449  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320324
	
	I1010 19:24:12.766480  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.769836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770331  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.770356  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770584  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.770810  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.770962  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.771119  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.771252  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.771448  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.771470  147213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320324/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:24:12.882458  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:24:12.882495  147213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:24:12.882537  147213 buildroot.go:174] setting up certificates
	I1010 19:24:12.882547  147213 provision.go:84] configureAuth start
	I1010 19:24:12.882562  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.882865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.885854  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886139  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.886173  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886308  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.888479  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.888819  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888976  147213 provision.go:143] copyHostCerts
	I1010 19:24:12.889037  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:24:12.889049  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:24:12.889102  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:24:12.889235  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:24:12.889246  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:24:12.889278  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:24:12.889370  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:24:12.889381  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:24:12.889406  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:24:12.889493  147213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.no-preload-320324 san=[127.0.0.1 192.168.72.11 localhost minikube no-preload-320324]
	I1010 19:24:12.978176  147213 provision.go:177] copyRemoteCerts
	I1010 19:24:12.978235  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:24:12.978261  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.981662  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982182  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.982218  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.982647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.982829  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.983005  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.067269  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:24:13.092777  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 19:24:13.118530  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:24:13.143401  147213 provision.go:87] duration metric: took 260.833877ms to configureAuth
	I1010 19:24:13.143436  147213 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:24:13.143678  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:13.143776  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.147086  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147507  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.147531  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147787  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.148032  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148222  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.148660  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.149013  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.149041  147213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:24:13.375683  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:24:13.375714  147213 machine.go:96] duration metric: took 841.493636ms to provisionDockerMachine
	I1010 19:24:13.375736  147213 start.go:293] postStartSetup for "no-preload-320324" (driver="kvm2")
	I1010 19:24:13.375754  147213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:24:13.375775  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.376085  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:24:13.376116  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.378855  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379179  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.379224  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379408  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.379608  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.379769  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.379910  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.459580  147213 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:24:13.463644  147213 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:24:13.463674  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:24:13.463751  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:24:13.463845  147213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:24:13.463963  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:24:13.473483  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:13.498773  147213 start.go:296] duration metric: took 123.021762ms for postStartSetup
	I1010 19:24:13.498814  147213 fix.go:56] duration metric: took 20.640532088s for fixHost
	I1010 19:24:13.498834  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.501681  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502243  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.502281  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502476  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.502679  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502835  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502993  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.503177  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.503383  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.503396  147213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:24:13.613929  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588253.586950075
	
	I1010 19:24:13.613954  147213 fix.go:216] guest clock: 1728588253.586950075
	I1010 19:24:13.613963  147213 fix.go:229] Guest: 2024-10-10 19:24:13.586950075 +0000 UTC Remote: 2024-10-10 19:24:13.498818059 +0000 UTC m=+359.788559229 (delta=88.132016ms)
	I1010 19:24:13.613988  147213 fix.go:200] guest clock delta is within tolerance: 88.132016ms
	I1010 19:24:13.614020  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 20.755775587s
	I1010 19:24:13.614063  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.614473  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:13.617327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.617694  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.617721  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.618016  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618670  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618884  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618989  147213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:24:13.619047  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.619142  147213 ssh_runner.go:195] Run: cat /version.json
	I1010 19:24:13.619185  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.621972  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622229  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622322  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622348  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622533  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622666  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622697  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622736  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.622865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622930  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623059  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.623073  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.623225  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623349  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.720999  147213 ssh_runner.go:195] Run: systemctl --version
	I1010 19:24:13.727679  147213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:24:09.562834  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:12.058686  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:13.870558  147213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:24:13.877853  147213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:24:13.877923  147213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:24:13.896295  147213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:24:13.896325  147213 start.go:495] detecting cgroup driver to use...
	I1010 19:24:13.896400  147213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:24:13.913122  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:24:13.929359  147213 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:24:13.929437  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:24:13.944840  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:24:13.960062  147213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:24:14.090774  147213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:24:14.246094  147213 docker.go:233] disabling docker service ...
	I1010 19:24:14.246161  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:24:14.264682  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:24:14.280264  147213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:24:14.437156  147213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:24:14.569220  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:24:14.585723  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:24:14.607349  147213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:24:14.607429  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.619113  147213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:24:14.619198  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.631818  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.643977  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.655753  147213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:24:14.667235  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.679225  147213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.698760  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.710440  147213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:24:14.722565  147213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:24:14.722625  147213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:24:14.740587  147213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:24:14.752630  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:14.887728  147213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:24:14.989026  147213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:24:14.989109  147213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:24:14.995309  147213 start.go:563] Will wait 60s for crictl version
	I1010 19:24:14.995366  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.999840  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:24:15.043758  147213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:24:15.043856  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.079274  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.116630  147213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:24:13.050633  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:15.552413  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:14.028090  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:14.528072  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.028648  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.528388  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.028060  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.528472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.028098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.528125  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.028563  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.528613  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.118343  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:15.121596  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122101  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:15.122133  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122396  147213 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1010 19:24:15.127140  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:15.141249  147213 kubeadm.go:883] updating cluster {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:24:15.141375  147213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:24:15.141417  147213 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:24:15.183271  147213 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:24:15.183303  147213 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:24:15.183412  147213 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.183444  147213 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.183452  147213 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.183459  147213 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1010 19:24:15.183422  147213 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.183493  147213 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.183512  147213 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.183507  147213 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.185099  147213 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.185098  147213 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.185103  147213 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.185106  147213 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.328484  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.333573  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.340047  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.358922  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1010 19:24:15.359800  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.366668  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.409942  147213 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1010 19:24:15.409995  147213 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.410050  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.416186  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.452343  147213 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1010 19:24:15.452385  147213 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.452426  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.533567  147213 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1010 19:24:15.533620  147213 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.533671  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585611  147213 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1010 19:24:15.585659  147213 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.585685  147213 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1010 19:24:15.585712  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585724  147213 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.585765  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585769  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.585805  147213 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1010 19:24:15.585832  147213 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.585856  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.585872  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585943  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.603131  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.661918  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.683739  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.683760  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.683833  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.683880  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.685385  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.792253  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.818116  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.818183  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.818289  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.818321  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.818402  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.878069  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1010 19:24:15.878202  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.940520  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.953841  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1010 19:24:15.953955  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:15.953990  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.954047  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1010 19:24:15.954115  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1010 19:24:15.954120  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1010 19:24:15.954130  147213 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954144  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:15.954157  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954205  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:16.005975  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1010 19:24:16.006028  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1010 19:24:16.006090  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:16.023905  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1010 19:24:16.023990  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1010 19:24:16.024024  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:16.024023  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1010 19:24:16.033715  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.150881  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.144766677s)
	I1010 19:24:18.150935  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1010 19:24:18.150931  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.196753845s)
	I1010 19:24:18.150944  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.126894115s)
	I1010 19:24:18.150973  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1010 19:24:18.150953  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1010 19:24:18.150982  147213 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.117235962s)
	I1010 19:24:18.151002  147213 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151014  147213 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1010 19:24:18.151053  147213 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.151069  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151097  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.059223  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:14.059252  148525 pod_ready.go:82] duration metric: took 6.507918149s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:14.059266  148525 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:16.066908  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.082398  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.051799  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:20.552644  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:19.028306  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:19.527854  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.028765  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.528245  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.027936  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.528172  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.028527  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.528446  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.028709  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.528711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.952099  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.801005716s)
	I1010 19:24:21.952134  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1010 19:24:21.952163  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952165  147213 ssh_runner.go:235] Completed: which crictl: (3.801048272s)
	I1010 19:24:21.952212  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952225  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:21.993627  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:20.566055  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:22.567145  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:23.053514  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:25.554151  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:24.028402  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:24.528516  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.028207  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.527928  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.028032  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.527826  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.028609  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.528448  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.028018  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.528597  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.929370  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.977128659s)
	I1010 19:24:23.929418  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1010 19:24:23.929450  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929498  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.935844384s)
	I1010 19:24:23.929532  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929551  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:26.009485  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079908324s)
	I1010 19:24:26.009567  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 19:24:26.009484  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079925224s)
	I1010 19:24:26.009641  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1010 19:24:26.009671  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:26.009684  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:26.009720  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:27.968483  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.958772952s)
	I1010 19:24:27.968534  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1010 19:24:27.968559  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.958813643s)
	I1010 19:24:27.968587  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1010 19:24:27.968619  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:27.968686  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:25.069787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:27.567013  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:28.050968  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:30.551528  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:29.027960  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.528054  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.028227  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.527791  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.027790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.528761  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.028343  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.528553  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.028195  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.527895  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.315157  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.346440456s)
	I1010 19:24:29.315211  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1010 19:24:29.315244  147213 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:29.315296  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:30.173931  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 19:24:30.173977  147213 cache_images.go:123] Successfully loaded all cached images
	I1010 19:24:30.173985  147213 cache_images.go:92] duration metric: took 14.990666845s to LoadCachedImages
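	The cache-image pass above follows one pattern per image: check whether the image already exists in the container runtime at the expected digest, remove a stale copy with crictl if not, then load the cached tarball with podman (skipping the copy when the tarball already exists on the node). A minimal sketch of that check-then-load loop, run directly on the node rather than over SSH as minikube does, with illustrative tarball paths taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// imageTar maps an image reference to its cached tarball on the node.
	// Paths mirror the /var/lib/minikube/images entries above; adjust as needed.
	var imageTar = map[string]string{
		"registry.k8s.io/coredns/coredns:v1.11.3": "/var/lib/minikube/images/coredns_v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0":           "/var/lib/minikube/images/etcd_3.5.15-0",
	}

	func main() {
		for ref, tar := range imageTar {
			// "crictl inspecti" exits non-zero when the image is absent from the runtime.
			if err := exec.Command("sudo", "crictl", "inspecti", ref).Run(); err == nil {
				fmt.Println("already present:", ref)
				continue
			}
			// Load the cached tarball into the image store, mirroring the
			// "sudo podman load -i ..." calls in the log above.
			if out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput(); err != nil {
				fmt.Printf("load %s failed: %v\n%s", ref, err, out)
				continue
			}
			fmt.Println("loaded:", ref)
		}
	}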
	I1010 19:24:30.174001  147213 kubeadm.go:934] updating node { 192.168.72.11 8443 v1.31.1 crio true true} ...
	I1010 19:24:30.174129  147213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:24:30.174221  147213 ssh_runner.go:195] Run: crio config
	I1010 19:24:30.222677  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:30.222702  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:30.222711  147213 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:24:30.222736  147213 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320324 NodeName:no-preload-320324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:24:30.222923  147213 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320324"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:24:30.222998  147213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:24:30.233755  147213 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:24:30.233818  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:24:30.243829  147213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1010 19:24:30.263056  147213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:24:30.282362  147213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
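	At this point the multi-document kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) has been written to /var/tmp/minikube/kubeadm.yaml.new on the node. A quick way to sanity-check such a file is to decode each YAML document and print its kind; a minimal sketch, assuming gopkg.in/yaml.v3 is available:

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF once all four documents have been read
			}
			// e.g. "kubeadm.k8s.io/v1beta3 ClusterConfiguration"
			fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
		}
	}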
	I1010 19:24:30.300449  147213 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I1010 19:24:30.304661  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:30.317462  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:30.445515  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:30.462816  147213 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324 for IP: 192.168.72.11
	I1010 19:24:30.462847  147213 certs.go:194] generating shared ca certs ...
	I1010 19:24:30.462871  147213 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:30.463074  147213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:24:30.463132  147213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:24:30.463145  147213 certs.go:256] generating profile certs ...
	I1010 19:24:30.463289  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/client.key
	I1010 19:24:30.463364  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key.a7785fc5
	I1010 19:24:30.463413  147213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key
	I1010 19:24:30.463565  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:24:30.463604  147213 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:24:30.463617  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:24:30.463657  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:24:30.463689  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:24:30.463721  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:24:30.463774  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:30.464502  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:24:30.525320  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:24:30.565229  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:24:30.597731  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:24:30.626174  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 19:24:30.659991  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:24:30.685662  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:24:30.710757  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:24:30.736325  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:24:30.771239  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:24:30.796467  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:24:30.821925  147213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:24:30.840743  147213 ssh_runner.go:195] Run: openssl version
	I1010 19:24:30.846898  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:24:30.858410  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863188  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863260  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.869307  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:24:30.880319  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:24:30.891307  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895771  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895828  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.901510  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:24:30.912627  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:24:30.924330  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929108  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929194  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.935266  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:24:30.946714  147213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:24:30.951692  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:24:30.957910  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:24:30.964296  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:24:30.971001  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:24:30.977427  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:24:30.984201  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
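	The "openssl x509 -checkend 86400" runs above confirm that each existing control-plane certificate remains valid for at least another 24 hours before being reused. The same check can be done natively; a minimal sketch using Go's crypto/x509, with cert paths taken from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, path := range certs {
			data, err := os.ReadFile(path)
			if err != nil {
				fmt.Println(path, err)
				continue
			}
			block, _ := pem.Decode(data)
			if block == nil {
				fmt.Println(path, "no PEM data")
				continue
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				fmt.Println(path, err)
				continue
			}
			// Equivalent of "openssl x509 -checkend 86400": still valid 24h from now?
			if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
				fmt.Println(path, "expires within 24h:", cert.NotAfter)
			} else {
				fmt.Println(path, "valid until", cert.NotAfter)
			}
		}
	}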
	I1010 19:24:30.990532  147213 kubeadm.go:392] StartCluster: {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:24:30.990622  147213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:24:30.990727  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.033544  147213 cri.go:89] found id: ""
	I1010 19:24:31.033624  147213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:24:31.044956  147213 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:24:31.044975  147213 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:24:31.045025  147213 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:24:31.056563  147213 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:24:31.057705  147213 kubeconfig.go:125] found "no-preload-320324" server: "https://192.168.72.11:8443"
	I1010 19:24:31.059853  147213 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:24:31.071304  147213 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.11
	I1010 19:24:31.071338  147213 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:24:31.071353  147213 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:24:31.071444  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.107345  147213 cri.go:89] found id: ""
	I1010 19:24:31.107429  147213 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:24:31.125556  147213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:24:31.135390  147213 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:24:31.135428  147213 kubeadm.go:157] found existing configuration files:
	
	I1010 19:24:31.135478  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:24:31.144653  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:24:31.144715  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:24:31.154458  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:24:31.163444  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:24:31.163501  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:24:31.172633  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.181939  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:24:31.182001  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.191638  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:24:31.200846  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:24:31.200935  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:24:31.211048  147213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:24:31.221008  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:31.352733  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.270546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.474510  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.551517  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
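	Because existing configuration files were found, the control plane is rebuilt phase by phase instead of via a full "kubeadm init": certs, kubeconfig, kubelet-start, control-plane, and etcd, with the addon phase deferred until the API server reports healthy. A minimal sketch of driving the same phase sequence, assuming kubeadm and the config file are already on the node:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cfg := "/var/tmp/minikube/kubeadm.yaml" // same config path used in the log
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", cfg},
			{"init", "phase", "kubeconfig", "all", "--config", cfg},
			{"init", "phase", "kubelet-start", "--config", cfg},
			{"init", "phase", "control-plane", "all", "--config", cfg},
			{"init", "phase", "etcd", "local", "--config", cfg},
		}
		for _, args := range phases {
			// Run each phase in order, stopping at the first failure.
			out, err := exec.Command("sudo", append([]string{"kubeadm"}, args...)...).CombinedOutput()
			if err != nil {
				fmt.Printf("kubeadm %v failed: %v\n%s", args, err, out)
				return
			}
		}
		fmt.Println("control-plane phases completed")
	}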
	I1010 19:24:32.707737  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:32.707826  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.208647  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.708539  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.728647  147213 api_server.go:72] duration metric: took 1.020907246s to wait for apiserver process to appear ...
	I1010 19:24:33.728678  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:33.728701  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:30.066635  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.066732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.552277  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:35.051399  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:34.028434  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:34.527949  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.028224  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.528470  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.028540  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.528499  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.028362  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.528483  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.028531  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.527865  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.025756  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.025787  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.025802  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.078247  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.078283  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.229601  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.237166  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.237204  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:37.728824  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.735660  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.735700  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.229746  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.234449  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:38.234491  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.729000  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.737564  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:24:38.751982  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:38.752012  147213 api_server.go:131] duration metric: took 5.023326632s to wait for apiserver health ...
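	The /healthz polling above moves through three stages: 403 while the probe is still treated as anonymous, 500 while the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks finish, and finally 200. A minimal sketch of such a wait loop against the same endpoint, assuming it is acceptable to skip TLS verification for a local probe:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.72.11:8443/healthz" // endpoint from the log
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("healthz returned", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond) // roughly the poll interval seen in the log
		}
		fmt.Println("timed out waiting for healthz")
	}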
	I1010 19:24:38.752023  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:38.752030  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:38.753351  147213 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:34.067208  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:36.067413  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.566729  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.754645  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:38.772086  147213 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:38.792017  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:38.800547  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:38.800592  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:38.800602  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:38.800609  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:38.800617  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:38.800624  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:24:38.800629  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:38.800638  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:38.800642  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:24:38.800648  147213 system_pods.go:74] duration metric: took 8.60732ms to wait for pod list to return data ...
	I1010 19:24:38.800654  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:38.804628  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:38.804663  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:38.804680  147213 node_conditions.go:105] duration metric: took 4.021699ms to run NodePressure ...
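	With the API server healthy, the remaining checks list the kube-system pods and verify node capacity before the addon phase runs. A minimal sketch of the same two queries with client-go, run on the node using the kubeconfig written to /var/lib/minikube/kubeconfig earlier in the log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// List kube-system pods, as in the system_pods.go wait above.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
		}

		// Inspect node allocatable resources, as in the NodePressure check above.
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Allocatable.Cpu().String(),
				n.Status.Allocatable.StorageEphemeral().String())
		}
	}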
	I1010 19:24:38.804700  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:39.078452  147213 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087090  147213 kubeadm.go:739] kubelet initialised
	I1010 19:24:39.087116  147213 kubeadm.go:740] duration metric: took 8.636436ms waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087125  147213 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:39.094468  147213 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.108724  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108756  147213 pod_ready.go:82] duration metric: took 14.254631ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.108770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108780  147213 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.119304  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119335  147213 pod_ready.go:82] duration metric: took 10.543376ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.119345  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119352  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.127243  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127268  147213 pod_ready.go:82] duration metric: took 7.907414ms for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.127278  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127285  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.195549  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195578  147213 pod_ready.go:82] duration metric: took 68.282333ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.195588  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195594  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.595842  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595871  147213 pod_ready.go:82] duration metric: took 400.267905ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.595880  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595886  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.995731  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995760  147213 pod_ready.go:82] duration metric: took 399.866947ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.995770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995777  147213 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:40.396420  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396456  147213 pod_ready.go:82] duration metric: took 400.667834ms for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:40.396470  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396482  147213 pod_ready.go:39] duration metric: took 1.309346973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:40.396508  147213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:24:40.409956  147213 ops.go:34] apiserver oom_adj: -16
	I1010 19:24:40.409980  147213 kubeadm.go:597] duration metric: took 9.364998977s to restartPrimaryControlPlane
	I1010 19:24:40.409991  147213 kubeadm.go:394] duration metric: took 9.419470024s to StartCluster
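For reference, the pod_ready.go polling above can be approximated by hand with kubectl; this is a hedged sketch, assuming the kubeconfig context carries the profile name no-preload-320324 (pod names and the kube-dns label are copied from the log):

  kubectl --context no-preload-320324 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s
  kubectl --context no-preload-320324 -n kube-system wait --for=condition=Ready pod/etcd-no-preload-320324 pod/kube-apiserver-no-preload-320324 --timeout=4m0s

Until the node itself reports Ready, these waits do not complete, mirroring the "skipping!" / Ready:"False" messages logged above.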
	I1010 19:24:40.410009  147213 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.410085  147213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:24:40.413037  147213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.413448  147213 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:24:40.413783  147213 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:24:40.413979  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:40.413996  147213 addons.go:69] Setting default-storageclass=true in profile "no-preload-320324"
	I1010 19:24:40.414020  147213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320324"
	I1010 19:24:40.413983  147213 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320324"
	I1010 19:24:40.414048  147213 addons.go:234] Setting addon storage-provisioner=true in "no-preload-320324"
	W1010 19:24:40.414057  147213 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:24:40.414091  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414170  147213 addons.go:69] Setting metrics-server=true in profile "no-preload-320324"
	I1010 19:24:40.414230  147213 addons.go:234] Setting addon metrics-server=true in "no-preload-320324"
	W1010 19:24:40.414252  147213 addons.go:243] addon metrics-server should already be in state true
	I1010 19:24:40.414292  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414612  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414640  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414678  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.414712  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.415409  147213 out.go:177] * Verifying Kubernetes components...
	I1010 19:24:40.415412  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.415553  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.416812  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:40.431363  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1010 19:24:40.431474  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I1010 19:24:40.431659  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I1010 19:24:40.431983  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432136  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432156  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432567  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432587  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432710  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432732  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432740  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432749  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.433000  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433079  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433103  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433468  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.433498  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.436984  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.453362  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.453426  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.454884  147213 addons.go:234] Setting addon default-storageclass=true in "no-preload-320324"
	W1010 19:24:40.454913  147213 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:24:40.454947  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.455335  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.455394  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.470642  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1010 19:24:40.471118  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.471701  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.471730  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.472241  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.472523  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.473953  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I1010 19:24:40.474196  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I1010 19:24:40.474332  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474672  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474814  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.474827  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475181  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.475210  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475310  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475702  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475785  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.475825  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.475922  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.476046  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.478147  147213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:40.478395  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.479869  147213 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.479896  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:24:40.479922  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.480549  147213 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:24:37.051611  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:39.551952  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:41.553895  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:40.482101  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:24:40.482119  147213 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:24:40.482144  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.484066  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484560  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.484588  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484833  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.485065  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.485241  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.485272  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485443  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.485788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.485807  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485842  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.486017  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.486202  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.486454  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.492533  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1010 19:24:40.493012  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.493566  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.493595  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.494056  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.494325  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.496053  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.496301  147213 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.496321  147213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:24:40.496344  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.499125  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499667  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.499690  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499843  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.500022  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.500194  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.500357  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.651454  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:40.667056  147213 node_ready.go:35] waiting up to 6m0s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:40.782217  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.803094  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:24:40.803122  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:24:40.812288  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.837679  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:24:40.837723  147213 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:24:40.882090  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:40.882119  147213 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:24:40.940115  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:41.949181  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.136852217s)
	I1010 19:24:41.949258  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949275  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949286  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.167030419s)
	I1010 19:24:41.949327  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949345  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949625  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949652  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949660  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949661  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949668  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949679  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949761  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949804  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949819  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949826  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.950811  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950824  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.950827  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950822  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950845  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950811  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.957797  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.957814  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.958071  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.958077  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.958099  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005530  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065377363s)
	I1010 19:24:42.005590  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.005602  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.005914  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.005937  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005935  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.005972  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.006003  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.006280  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.006313  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.006335  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.006354  147213 addons.go:475] Verifying addon metrics-server=true in "no-preload-320324"
	I1010 19:24:42.008523  147213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1010 19:24:39.028363  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:39.528571  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.027992  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.528552  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.028220  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.527889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:42.028462  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:42.028549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:42.070669  148123 cri.go:89] found id: ""
	I1010 19:24:42.070701  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.070710  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:42.070716  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:42.070775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:42.110693  148123 cri.go:89] found id: ""
	I1010 19:24:42.110731  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.110748  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:42.110756  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:42.110816  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:42.148471  148123 cri.go:89] found id: ""
	I1010 19:24:42.148511  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.148525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:42.148535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:42.148603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:42.184637  148123 cri.go:89] found id: ""
	I1010 19:24:42.184670  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.184683  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:42.184691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:42.184759  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:42.226794  148123 cri.go:89] found id: ""
	I1010 19:24:42.226834  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.226848  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:42.226857  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:42.226942  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:42.262969  148123 cri.go:89] found id: ""
	I1010 19:24:42.263004  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.263017  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:42.263027  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:42.263167  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:42.301063  148123 cri.go:89] found id: ""
	I1010 19:24:42.301088  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.301096  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:42.301102  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:42.301153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:42.338829  148123 cri.go:89] found id: ""
	I1010 19:24:42.338862  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.338873  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:42.338886  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:42.338901  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:42.391426  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:42.391478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:42.405520  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:42.405563  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:42.544245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:42.544275  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:42.544292  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:42.620760  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:42.620804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:42.009965  147213 addons.go:510] duration metric: took 1.596190602s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
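The addon enablement summarized above can be reproduced or checked by hand against the same profile; a minimal sketch, assuming the standard minikube and kubectl CLIs, that the kubeconfig context equals the profile name, and that the metrics-server Deployment name matches the pod prefix seen in the log (storage-provisioner runs as a plain pod):

  minikube -p no-preload-320324 addons list
  minikube -p no-preload-320324 addons enable metrics-server
  kubectl --context no-preload-320324 -n kube-system get deploy metrics-server
  kubectl --context no-preload-320324 -n kube-system get pod storage-provisioner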
	I1010 19:24:42.672792  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:41.066744  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.066850  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.557231  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:46.051820  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:45.164389  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:45.179714  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:45.179776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:45.216271  148123 cri.go:89] found id: ""
	I1010 19:24:45.216308  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.216319  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:45.216327  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:45.216394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:45.255109  148123 cri.go:89] found id: ""
	I1010 19:24:45.255154  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.255167  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:45.255176  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:45.255248  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:45.294338  148123 cri.go:89] found id: ""
	I1010 19:24:45.294369  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.294380  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:45.294388  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:45.294457  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:45.339573  148123 cri.go:89] found id: ""
	I1010 19:24:45.339606  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.339626  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:45.339636  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:45.339703  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:45.400184  148123 cri.go:89] found id: ""
	I1010 19:24:45.400214  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.400225  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:45.400234  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:45.400301  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:45.436152  148123 cri.go:89] found id: ""
	I1010 19:24:45.436183  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.436195  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:45.436203  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:45.436262  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.484321  148123 cri.go:89] found id: ""
	I1010 19:24:45.484347  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.484355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:45.484361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:45.484441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:45.532893  148123 cri.go:89] found id: ""
	I1010 19:24:45.532923  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.532932  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:45.532943  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:45.532958  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:45.585183  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:45.585214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:45.638800  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:45.638847  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:45.653928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:45.653961  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:45.733534  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:45.733564  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:45.733580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.314367  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:48.329687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:48.329754  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:48.370372  148123 cri.go:89] found id: ""
	I1010 19:24:48.370400  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.370409  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:48.370415  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:48.370490  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:48.409362  148123 cri.go:89] found id: ""
	I1010 19:24:48.409396  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.409407  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:48.409415  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:48.409500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:48.449640  148123 cri.go:89] found id: ""
	I1010 19:24:48.449672  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.449681  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:48.449687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:48.449746  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:48.485053  148123 cri.go:89] found id: ""
	I1010 19:24:48.485104  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.485121  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:48.485129  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:48.485183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:48.520083  148123 cri.go:89] found id: ""
	I1010 19:24:48.520113  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.520121  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:48.520127  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:48.520185  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:48.555112  148123 cri.go:89] found id: ""
	I1010 19:24:48.555138  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.555149  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:48.555156  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:48.555241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.171882  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:47.673073  147213 node_ready.go:49] node "no-preload-320324" has status "Ready":"True"
	I1010 19:24:47.673103  147213 node_ready.go:38] duration metric: took 7.00601327s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:47.673117  147213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:47.682195  147213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690079  147213 pod_ready.go:93] pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.690111  147213 pod_ready.go:82] duration metric: took 7.882823ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690126  147213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698009  147213 pod_ready.go:93] pod "etcd-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.698038  147213 pod_ready.go:82] duration metric: took 7.903016ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698052  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:45.066893  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:47.566144  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.551853  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.050365  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.592682  148123 cri.go:89] found id: ""
	I1010 19:24:48.592710  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.592719  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:48.592725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:48.592775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:48.628449  148123 cri.go:89] found id: ""
	I1010 19:24:48.628482  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.628490  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:48.628500  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:48.628513  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.709385  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:48.709428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:48.752542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:48.752677  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:48.812331  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:48.812385  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:48.827057  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:48.827095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:48.908312  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.409122  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:51.423404  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:51.423482  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:51.465742  148123 cri.go:89] found id: ""
	I1010 19:24:51.465781  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.465793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:51.465803  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:51.465875  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:51.501597  148123 cri.go:89] found id: ""
	I1010 19:24:51.501630  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.501641  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:51.501648  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:51.501711  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:51.537500  148123 cri.go:89] found id: ""
	I1010 19:24:51.537539  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.537551  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:51.537559  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:51.537626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:51.579546  148123 cri.go:89] found id: ""
	I1010 19:24:51.579576  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.579587  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:51.579595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:51.579660  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:51.619893  148123 cri.go:89] found id: ""
	I1010 19:24:51.619917  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.619925  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:51.619931  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:51.619999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:51.655787  148123 cri.go:89] found id: ""
	I1010 19:24:51.655824  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.655837  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:51.655846  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:51.655921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:51.699582  148123 cri.go:89] found id: ""
	I1010 19:24:51.699611  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.699619  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:51.699625  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:51.699685  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:51.738658  148123 cri.go:89] found id: ""
	I1010 19:24:51.738689  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.738697  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:51.738707  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:51.738721  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:51.789556  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:51.789587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:51.858919  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:51.858968  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:51.887818  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:51.887854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:51.964408  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.964434  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:51.964449  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:49.705130  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.705847  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.205374  147213 pod_ready.go:93] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.205401  147213 pod_ready.go:82] duration metric: took 5.507341974s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.205413  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210237  147213 pod_ready.go:93] pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.210259  147213 pod_ready.go:82] duration metric: took 4.83925ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210269  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215158  147213 pod_ready.go:93] pod "kube-proxy-vn6sv" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.215186  147213 pod_ready.go:82] duration metric: took 4.909888ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215198  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220077  147213 pod_ready.go:93] pod "kube-scheduler-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.220097  147213 pod_ready.go:82] duration metric: took 4.890652ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220105  147213 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
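Only the metrics-server pod is still unready at this point; its state can be inspected directly while the harness waits (pod name copied from the log, context name assumed to equal the profile):

  kubectl --context no-preload-320324 -n kube-system get pod metrics-server-6867b74b74-8w9lk -o wide
  kubectl --context no-preload-320324 -n kube-system describe pod metrics-server-6867b74b74-8w9lk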
	I1010 19:24:50.066165  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:52.066343  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.552604  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:56.050748  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.545627  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:54.560748  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:54.560830  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:54.595863  148123 cri.go:89] found id: ""
	I1010 19:24:54.595893  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.595903  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:54.595912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:54.595978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:54.633645  148123 cri.go:89] found id: ""
	I1010 19:24:54.633681  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.633693  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:54.633701  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:54.633761  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:54.668269  148123 cri.go:89] found id: ""
	I1010 19:24:54.668299  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.668311  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:54.668317  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:54.668369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:54.706559  148123 cri.go:89] found id: ""
	I1010 19:24:54.706591  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.706600  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:54.706608  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:54.706673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:54.746248  148123 cri.go:89] found id: ""
	I1010 19:24:54.746283  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.746295  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:54.746303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:54.746383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:54.782982  148123 cri.go:89] found id: ""
	I1010 19:24:54.783017  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.783027  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:54.783033  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:54.783085  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:54.819664  148123 cri.go:89] found id: ""
	I1010 19:24:54.819700  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.819713  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:54.819722  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:54.819797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:54.859603  148123 cri.go:89] found id: ""
	I1010 19:24:54.859632  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.859640  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:54.859650  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:54.859662  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:54.910949  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:54.910987  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:54.925941  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:54.925975  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:55.009626  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:55.009654  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:55.009669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.097196  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:55.097237  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:57.641732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:57.661141  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:57.661222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:57.716054  148123 cri.go:89] found id: ""
	I1010 19:24:57.716086  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.716094  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:57.716100  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:57.716178  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:57.766862  148123 cri.go:89] found id: ""
	I1010 19:24:57.766892  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.766906  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:57.766917  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:57.766989  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:57.814776  148123 cri.go:89] found id: ""
	I1010 19:24:57.814808  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.814821  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:57.814829  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:57.814899  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:57.850458  148123 cri.go:89] found id: ""
	I1010 19:24:57.850495  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.850505  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:57.850516  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:57.850667  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:57.886541  148123 cri.go:89] found id: ""
	I1010 19:24:57.886566  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.886575  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:57.886581  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:57.886645  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:57.920748  148123 cri.go:89] found id: ""
	I1010 19:24:57.920783  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.920795  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:57.920802  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:57.920887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:57.957801  148123 cri.go:89] found id: ""
	I1010 19:24:57.957833  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.957844  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:57.957852  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:57.957919  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:57.995648  148123 cri.go:89] found id: ""
	I1010 19:24:57.995683  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.995694  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:57.995706  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:57.995723  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:58.034987  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:58.035030  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:58.089014  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:58.089066  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:58.104179  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:58.104211  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:58.178239  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:58.178271  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:58.178291  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.229459  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.727298  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.566779  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.065902  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:58.051248  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.550512  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.756057  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:00.770086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:00.770150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:00.811400  148123 cri.go:89] found id: ""
	I1010 19:25:00.811444  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.811456  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:00.811466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:00.811533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:00.848821  148123 cri.go:89] found id: ""
	I1010 19:25:00.848859  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.848870  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:00.848877  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:00.848947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:00.883529  148123 cri.go:89] found id: ""
	I1010 19:25:00.883557  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.883566  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:00.883573  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:00.883631  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:00.922952  148123 cri.go:89] found id: ""
	I1010 19:25:00.922982  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.922992  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:00.922998  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:00.923057  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:00.956116  148123 cri.go:89] found id: ""
	I1010 19:25:00.956147  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.956159  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:00.956168  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:00.956233  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:00.998892  148123 cri.go:89] found id: ""
	I1010 19:25:00.998919  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.998930  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:00.998939  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:00.998996  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:01.037617  148123 cri.go:89] found id: ""
	I1010 19:25:01.037649  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.037657  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:01.037663  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:01.037717  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:01.078003  148123 cri.go:89] found id: ""
	I1010 19:25:01.078034  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.078046  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:01.078055  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:01.078069  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:01.118948  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:01.118977  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:01.171053  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:01.171091  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:01.187027  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:01.187060  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:01.261288  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:01.261315  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:01.261330  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:59.728997  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.227142  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:59.566448  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.066184  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.551951  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:05.050558  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:03.848144  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:03.862527  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:03.862601  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:03.899120  148123 cri.go:89] found id: ""
	I1010 19:25:03.899159  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.899180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:03.899187  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:03.899276  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:03.935461  148123 cri.go:89] found id: ""
	I1010 19:25:03.935492  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.935500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:03.935507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:03.935569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:03.971892  148123 cri.go:89] found id: ""
	I1010 19:25:03.971925  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.971937  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:03.971945  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:03.972019  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:04.008341  148123 cri.go:89] found id: ""
	I1010 19:25:04.008368  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.008377  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:04.008390  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:04.008447  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:04.042007  148123 cri.go:89] found id: ""
	I1010 19:25:04.042036  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.042044  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:04.042051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:04.042101  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:04.078017  148123 cri.go:89] found id: ""
	I1010 19:25:04.078045  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.078053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:04.078059  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:04.078112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:04.116792  148123 cri.go:89] found id: ""
	I1010 19:25:04.116823  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.116832  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:04.116839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:04.116928  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:04.153468  148123 cri.go:89] found id: ""
	I1010 19:25:04.153496  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.153503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:04.153513  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:04.153525  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:04.230646  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:04.230683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:04.270975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:04.271015  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.320845  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:04.320902  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:04.337789  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:04.337828  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:04.416077  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:06.916412  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:06.931309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:06.931376  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:06.969224  148123 cri.go:89] found id: ""
	I1010 19:25:06.969257  148123 logs.go:282] 0 containers: []
	W1010 19:25:06.969269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:06.969277  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:06.969346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:07.006598  148123 cri.go:89] found id: ""
	I1010 19:25:07.006653  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.006667  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:07.006676  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:07.006744  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:07.046392  148123 cri.go:89] found id: ""
	I1010 19:25:07.046418  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.046427  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:07.046433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:07.046491  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:07.089570  148123 cri.go:89] found id: ""
	I1010 19:25:07.089603  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.089615  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:07.089624  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:07.089689  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:07.127152  148123 cri.go:89] found id: ""
	I1010 19:25:07.127185  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.127195  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:07.127204  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:07.127282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:07.160516  148123 cri.go:89] found id: ""
	I1010 19:25:07.160543  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.160554  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:07.160563  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:07.160635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:07.197270  148123 cri.go:89] found id: ""
	I1010 19:25:07.197307  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.197318  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:07.197327  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:07.197395  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:07.232658  148123 cri.go:89] found id: ""
	I1010 19:25:07.232687  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.232696  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:07.232706  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:07.232720  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:07.246579  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:07.246609  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:07.315200  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:07.315230  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:07.315251  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:07.389532  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:07.389580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:07.438808  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:07.438837  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.227537  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.727865  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:04.067121  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.565089  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:08.565565  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:07.051371  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.051420  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.054211  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.990294  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:10.004155  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:10.004270  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:10.039034  148123 cri.go:89] found id: ""
	I1010 19:25:10.039068  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.039078  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:10.039087  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:10.039174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:10.079936  148123 cri.go:89] found id: ""
	I1010 19:25:10.079967  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.079978  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:10.079985  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:10.080068  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:10.116441  148123 cri.go:89] found id: ""
	I1010 19:25:10.116471  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.116483  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:10.116491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:10.116556  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:10.151998  148123 cri.go:89] found id: ""
	I1010 19:25:10.152045  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.152058  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:10.152075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:10.152153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:10.188514  148123 cri.go:89] found id: ""
	I1010 19:25:10.188547  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.188565  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:10.188574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:10.188640  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:10.223648  148123 cri.go:89] found id: ""
	I1010 19:25:10.223682  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.223694  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:10.223702  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:10.223771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:10.264117  148123 cri.go:89] found id: ""
	I1010 19:25:10.264143  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.264151  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:10.264158  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:10.264215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:10.302566  148123 cri.go:89] found id: ""
	I1010 19:25:10.302601  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.302613  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:10.302625  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:10.302649  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:10.316567  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:10.316606  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:10.388642  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:10.388674  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:10.388692  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:10.471035  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:10.471090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:10.519600  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:10.519645  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:13.075428  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:13.090830  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:13.090915  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:13.126302  148123 cri.go:89] found id: ""
	I1010 19:25:13.126336  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.126348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:13.126357  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:13.126429  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:13.163309  148123 cri.go:89] found id: ""
	I1010 19:25:13.163344  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.163357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:13.163365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:13.163428  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:13.197569  148123 cri.go:89] found id: ""
	I1010 19:25:13.197605  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.197615  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:13.197621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:13.197692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:13.236299  148123 cri.go:89] found id: ""
	I1010 19:25:13.236328  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.236336  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:13.236342  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:13.236406  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:13.275667  148123 cri.go:89] found id: ""
	I1010 19:25:13.275696  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.275705  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:13.275711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:13.275776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:13.309728  148123 cri.go:89] found id: ""
	I1010 19:25:13.309763  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.309774  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:13.309786  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:13.309854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:13.347464  148123 cri.go:89] found id: ""
	I1010 19:25:13.347493  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.347504  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:13.347513  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:13.347576  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:13.385086  148123 cri.go:89] found id: ""
	I1010 19:25:13.385119  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.385130  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:13.385142  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:13.385156  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:13.466513  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:13.466553  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:13.508610  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:13.508651  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:09.226850  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.227241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.726879  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:10.565663  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:12.565845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.555465  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:16.051764  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.564897  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:13.564932  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:13.578771  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:13.578803  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:13.651857  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.153066  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:16.167613  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:16.167698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:16.203867  148123 cri.go:89] found id: ""
	I1010 19:25:16.203897  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.203906  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:16.203912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:16.203965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:16.242077  148123 cri.go:89] found id: ""
	I1010 19:25:16.242109  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.242130  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:16.242138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:16.242234  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:16.283792  148123 cri.go:89] found id: ""
	I1010 19:25:16.283825  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.283836  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:16.283844  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:16.283907  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:16.319934  148123 cri.go:89] found id: ""
	I1010 19:25:16.319969  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.319980  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:16.319990  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:16.320063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:16.360439  148123 cri.go:89] found id: ""
	I1010 19:25:16.360482  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.360495  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:16.360504  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:16.360569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:16.395890  148123 cri.go:89] found id: ""
	I1010 19:25:16.395922  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.395931  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:16.395941  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:16.396009  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:16.431396  148123 cri.go:89] found id: ""
	I1010 19:25:16.431442  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.431456  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:16.431463  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:16.431531  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:16.471768  148123 cri.go:89] found id: ""
	I1010 19:25:16.471796  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.471804  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:16.471814  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:16.471830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:16.526112  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:16.526155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:16.541041  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:16.541074  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:16.623245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.623273  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:16.623289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:16.710840  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:16.710884  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:15.727171  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.728705  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:15.067362  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.566242  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:18.551207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:21.050222  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:19.257566  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:19.273982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:19.274063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:19.311360  148123 cri.go:89] found id: ""
	I1010 19:25:19.311392  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.311404  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:19.311411  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:19.311475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:19.347025  148123 cri.go:89] found id: ""
	I1010 19:25:19.347053  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.347062  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:19.347068  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:19.347120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:19.383575  148123 cri.go:89] found id: ""
	I1010 19:25:19.383608  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.383620  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:19.383633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:19.383698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:19.419245  148123 cri.go:89] found id: ""
	I1010 19:25:19.419270  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.419278  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:19.419284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:19.419334  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:19.453563  148123 cri.go:89] found id: ""
	I1010 19:25:19.453591  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.453601  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:19.453658  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:19.453715  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:19.496430  148123 cri.go:89] found id: ""
	I1010 19:25:19.496458  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.496466  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:19.496473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:19.496533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:19.534626  148123 cri.go:89] found id: ""
	I1010 19:25:19.534670  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.534682  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:19.534691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:19.534757  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:19.574186  148123 cri.go:89] found id: ""
	I1010 19:25:19.574236  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.574248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:19.574261  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:19.574283  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:19.587952  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:19.587989  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:19.664607  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:19.664644  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:19.664664  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:19.742185  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:19.742224  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:19.781530  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:19.781562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.335398  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:22.349627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:22.349693  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:22.389075  148123 cri.go:89] found id: ""
	I1010 19:25:22.389102  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.389110  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:22.389117  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:22.389168  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:22.424331  148123 cri.go:89] found id: ""
	I1010 19:25:22.424368  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.424381  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:22.424389  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:22.424460  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:22.464011  148123 cri.go:89] found id: ""
	I1010 19:25:22.464051  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.464062  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:22.464070  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:22.464141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:22.500786  148123 cri.go:89] found id: ""
	I1010 19:25:22.500819  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.500828  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:22.500835  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:22.500914  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:22.538016  148123 cri.go:89] found id: ""
	I1010 19:25:22.538053  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.538061  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:22.538068  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:22.538124  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:22.576600  148123 cri.go:89] found id: ""
	I1010 19:25:22.576629  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.576637  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:22.576643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:22.576702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:22.616020  148123 cri.go:89] found id: ""
	I1010 19:25:22.616048  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.616057  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:22.616064  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:22.616114  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:22.652238  148123 cri.go:89] found id: ""
	I1010 19:25:22.652271  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.652283  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:22.652295  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:22.652311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:22.726182  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:22.726216  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:22.726234  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:22.814040  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:22.814086  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:22.859254  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:22.859289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.916565  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:22.916607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:20.227871  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.732566  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:20.066872  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.566173  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:23.050833  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.551662  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.432636  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:25.446982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:25.447047  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.482440  148123 cri.go:89] found id: ""
	I1010 19:25:25.482472  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.482484  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:25.482492  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:25.482552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:25.517709  148123 cri.go:89] found id: ""
	I1010 19:25:25.517742  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.517756  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:25.517764  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:25.517867  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:25.561504  148123 cri.go:89] found id: ""
	I1010 19:25:25.561532  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.561544  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:25.561552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:25.561616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:25.601704  148123 cri.go:89] found id: ""
	I1010 19:25:25.601741  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.601753  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:25.601762  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:25.601825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:25.638519  148123 cri.go:89] found id: ""
	I1010 19:25:25.638544  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.638552  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:25.638558  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:25.638609  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:25.673390  148123 cri.go:89] found id: ""
	I1010 19:25:25.673425  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.673436  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:25.673447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:25.673525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:25.708251  148123 cri.go:89] found id: ""
	I1010 19:25:25.708278  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.708286  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:25.708293  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:25.708354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:25.749793  148123 cri.go:89] found id: ""
	I1010 19:25:25.749826  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.749837  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:25.749846  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:25.749861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:25.802511  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:25.802554  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:25.817523  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:25.817562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:25.898595  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:25.898619  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:25.898635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:25.978629  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:25.978684  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:28.520165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:28.532725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:28.532793  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.226875  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.729015  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.066298  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.066963  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.551915  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.558497  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:28.571667  148123 cri.go:89] found id: ""
	I1010 19:25:28.571695  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.571704  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:28.571710  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:28.571770  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:28.608136  148123 cri.go:89] found id: ""
	I1010 19:25:28.608165  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.608174  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:28.608181  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:28.608244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:28.642123  148123 cri.go:89] found id: ""
	I1010 19:25:28.642161  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.642173  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:28.642181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:28.642242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:28.678600  148123 cri.go:89] found id: ""
	I1010 19:25:28.678633  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.678643  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:28.678651  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:28.678702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:28.716627  148123 cri.go:89] found id: ""
	I1010 19:25:28.716660  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.716673  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:28.716681  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:28.716766  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:28.752750  148123 cri.go:89] found id: ""
	I1010 19:25:28.752786  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.752798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:28.752806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:28.752898  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:28.790021  148123 cri.go:89] found id: ""
	I1010 19:25:28.790054  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.790066  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:28.790075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:28.790128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:28.828760  148123 cri.go:89] found id: ""
	I1010 19:25:28.828803  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.828827  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:28.828839  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:28.828877  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:28.879773  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:28.879811  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:28.894311  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:28.894342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:28.968675  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:28.968706  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:28.968729  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:29.045862  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:29.045913  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:31.588772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:31.601453  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:31.601526  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:31.637056  148123 cri.go:89] found id: ""
	I1010 19:25:31.637090  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.637101  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:31.637108  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:31.637183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:31.677825  148123 cri.go:89] found id: ""
	I1010 19:25:31.677854  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.677862  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:31.677867  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:31.677920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:31.715372  148123 cri.go:89] found id: ""
	I1010 19:25:31.715402  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.715411  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:31.715419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:31.715470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:31.753767  148123 cri.go:89] found id: ""
	I1010 19:25:31.753794  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.753806  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:31.753814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:31.753879  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:31.792999  148123 cri.go:89] found id: ""
	I1010 19:25:31.793024  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.793035  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:31.793050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:31.793106  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:31.831571  148123 cri.go:89] found id: ""
	I1010 19:25:31.831598  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.831607  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:31.831614  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:31.831673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:31.880382  148123 cri.go:89] found id: ""
	I1010 19:25:31.880422  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.880436  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:31.880447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:31.880527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:31.919596  148123 cri.go:89] found id: ""
	I1010 19:25:31.919627  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.919637  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:31.919647  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:31.919661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:31.967963  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:31.968001  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:31.983064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:31.983106  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:32.056073  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:32.056105  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:32.056122  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:32.142927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:32.142978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:30.226683  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.227047  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.565699  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:31.566109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.051411  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.052064  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.550062  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.690025  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:34.705326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:34.705413  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:34.740756  148123 cri.go:89] found id: ""
	I1010 19:25:34.740786  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.740793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:34.740799  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:34.740872  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:34.777021  148123 cri.go:89] found id: ""
	I1010 19:25:34.777049  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.777061  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:34.777069  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:34.777123  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:34.826699  148123 cri.go:89] found id: ""
	I1010 19:25:34.826735  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.826747  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:34.826754  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:34.826896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:34.864297  148123 cri.go:89] found id: ""
	I1010 19:25:34.864329  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.864340  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:34.864348  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:34.864415  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:34.899198  148123 cri.go:89] found id: ""
	I1010 19:25:34.899234  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.899247  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:34.899256  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:34.899315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:34.934132  148123 cri.go:89] found id: ""
	I1010 19:25:34.934162  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.934170  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:34.934176  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:34.934238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:34.969858  148123 cri.go:89] found id: ""
	I1010 19:25:34.969891  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.969903  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:34.969911  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:34.969978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:35.006082  148123 cri.go:89] found id: ""
	I1010 19:25:35.006121  148123 logs.go:282] 0 containers: []
	W1010 19:25:35.006132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:35.006145  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:35.006160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:35.060596  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:35.060631  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:35.076246  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:35.076281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:35.151879  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:35.151912  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:35.151931  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:35.231802  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:35.231845  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:37.774550  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:37.787871  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:37.787938  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:37.824273  148123 cri.go:89] found id: ""
	I1010 19:25:37.824306  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.824317  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:37.824325  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:37.824410  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:37.860548  148123 cri.go:89] found id: ""
	I1010 19:25:37.860582  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.860593  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:37.860601  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:37.860677  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:37.894616  148123 cri.go:89] found id: ""
	I1010 19:25:37.894644  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.894654  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:37.894662  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:37.894730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:37.931247  148123 cri.go:89] found id: ""
	I1010 19:25:37.931272  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.931280  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:37.931287  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:37.931337  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:37.968028  148123 cri.go:89] found id: ""
	I1010 19:25:37.968068  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.968079  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:37.968086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:37.968153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:38.004661  148123 cri.go:89] found id: ""
	I1010 19:25:38.004692  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.004700  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:38.004706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:38.004760  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:38.040874  148123 cri.go:89] found id: ""
	I1010 19:25:38.040906  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.040915  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:38.040922  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:38.040990  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:38.083771  148123 cri.go:89] found id: ""
	I1010 19:25:38.083802  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.083811  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:38.083821  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:38.083836  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:38.099645  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:38.099683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:38.172168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:38.172207  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:38.172223  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:38.248523  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:38.248561  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:38.306594  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:38.306630  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:34.728106  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:37.226285  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.065919  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.066751  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.067361  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.550359  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.551190  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.873868  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:40.889561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:40.889668  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:40.926769  148123 cri.go:89] found id: ""
	I1010 19:25:40.926800  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.926808  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:40.926816  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:40.926869  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:40.962564  148123 cri.go:89] found id: ""
	I1010 19:25:40.962592  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.962600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:40.962606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:40.962659  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:40.997856  148123 cri.go:89] found id: ""
	I1010 19:25:40.997885  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.997894  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:40.997900  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:40.997951  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:41.035479  148123 cri.go:89] found id: ""
	I1010 19:25:41.035506  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.035514  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:41.035520  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:41.035575  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:41.077733  148123 cri.go:89] found id: ""
	I1010 19:25:41.077817  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.077830  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:41.077840  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:41.077916  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:41.115439  148123 cri.go:89] found id: ""
	I1010 19:25:41.115476  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.115489  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:41.115498  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:41.115552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:41.152586  148123 cri.go:89] found id: ""
	I1010 19:25:41.152625  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.152636  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:41.152643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:41.152695  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:41.186807  148123 cri.go:89] found id: ""
	I1010 19:25:41.186840  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.186851  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:41.186862  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:41.186880  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:41.200404  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:41.200447  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:41.280717  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:41.280744  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:41.280760  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:41.360561  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:41.360611  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:41.404461  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:41.404497  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:39.226903  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:41.227077  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.727197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.570404  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.066523  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.050813  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.051094  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.955723  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:43.970895  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:43.970964  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:44.012162  148123 cri.go:89] found id: ""
	I1010 19:25:44.012194  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.012203  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:44.012209  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:44.012282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:44.074583  148123 cri.go:89] found id: ""
	I1010 19:25:44.074606  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.074614  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:44.074620  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:44.074675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:44.133562  148123 cri.go:89] found id: ""
	I1010 19:25:44.133588  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.133596  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:44.133602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:44.133658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:44.168832  148123 cri.go:89] found id: ""
	I1010 19:25:44.168883  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.168896  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:44.168908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:44.168967  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:44.204896  148123 cri.go:89] found id: ""
	I1010 19:25:44.204923  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.204934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:44.204943  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:44.205002  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:44.242410  148123 cri.go:89] found id: ""
	I1010 19:25:44.242437  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.242448  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:44.242456  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:44.242524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:44.278553  148123 cri.go:89] found id: ""
	I1010 19:25:44.278581  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.278589  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:44.278595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:44.278658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:44.316099  148123 cri.go:89] found id: ""
	I1010 19:25:44.316125  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.316132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:44.316141  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:44.316155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:44.365809  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:44.365860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:44.380129  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:44.380162  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:44.454241  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:44.454267  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:44.454281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:44.530928  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:44.530974  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.078279  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:47.092233  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:47.092310  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:47.129517  148123 cri.go:89] found id: ""
	I1010 19:25:47.129557  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.129573  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:47.129582  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:47.129654  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:47.170309  148123 cri.go:89] found id: ""
	I1010 19:25:47.170345  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.170357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:47.170365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:47.170432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:47.205110  148123 cri.go:89] found id: ""
	I1010 19:25:47.205145  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.205156  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:47.205162  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:47.205228  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:47.245668  148123 cri.go:89] found id: ""
	I1010 19:25:47.245695  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.245704  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:47.245711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:47.245768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:47.284549  148123 cri.go:89] found id: ""
	I1010 19:25:47.284576  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.284584  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:47.284590  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:47.284643  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:47.324676  148123 cri.go:89] found id: ""
	I1010 19:25:47.324708  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.324720  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:47.324729  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:47.324782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:47.364315  148123 cri.go:89] found id: ""
	I1010 19:25:47.364347  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.364356  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:47.364362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:47.364433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:47.400611  148123 cri.go:89] found id: ""
	I1010 19:25:47.400641  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.400652  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:47.400664  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:47.400680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:47.415185  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:47.415227  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:47.481638  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:47.481665  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:47.481681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:47.560135  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:47.560171  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.602144  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:47.602174  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:46.227386  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:48.227699  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.066887  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.565340  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.051459  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:49.550170  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:51.554542  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.151770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:50.165336  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:50.165403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:50.201045  148123 cri.go:89] found id: ""
	I1010 19:25:50.201072  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.201082  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:50.201089  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:50.201154  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:50.237047  148123 cri.go:89] found id: ""
	I1010 19:25:50.237082  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.237094  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:50.237102  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:50.237174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:50.273727  148123 cri.go:89] found id: ""
	I1010 19:25:50.273756  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.273767  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:50.273780  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:50.273843  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:50.311397  148123 cri.go:89] found id: ""
	I1010 19:25:50.311424  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.311433  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:50.311450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:50.311513  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:50.347593  148123 cri.go:89] found id: ""
	I1010 19:25:50.347625  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.347637  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:50.347705  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:50.347782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:50.382195  148123 cri.go:89] found id: ""
	I1010 19:25:50.382228  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.382240  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:50.382247  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:50.382313  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:50.419189  148123 cri.go:89] found id: ""
	I1010 19:25:50.419221  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.419229  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:50.419236  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:50.419297  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:50.454368  148123 cri.go:89] found id: ""
	I1010 19:25:50.454399  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.454410  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:50.454421  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:50.454440  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:50.495575  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:50.495623  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.548691  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:50.548737  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:50.564585  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:50.564621  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:50.637152  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:50.637174  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:50.637190  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.221939  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:53.235889  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:53.235968  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:53.278589  148123 cri.go:89] found id: ""
	I1010 19:25:53.278620  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.278632  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:53.278640  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:53.278705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:53.315062  148123 cri.go:89] found id: ""
	I1010 19:25:53.315090  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.315101  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:53.315109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:53.315182  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:53.348994  148123 cri.go:89] found id: ""
	I1010 19:25:53.349031  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.349040  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:53.349046  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:53.349099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:53.383334  148123 cri.go:89] found id: ""
	I1010 19:25:53.383363  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.383373  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:53.383380  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:53.383450  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:53.417888  148123 cri.go:89] found id: ""
	I1010 19:25:53.417920  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.417931  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:53.417938  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:53.418013  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:53.454757  148123 cri.go:89] found id: ""
	I1010 19:25:53.454787  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.454798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:53.454806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:53.454871  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:53.491422  148123 cri.go:89] found id: ""
	I1010 19:25:53.491452  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.491464  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:53.491472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:53.491541  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:53.531240  148123 cri.go:89] found id: ""
	I1010 19:25:53.531271  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.531282  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:53.531294  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:53.531310  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.727196  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.226957  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.065907  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:52.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:54.051112  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:56.554137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.582576  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:53.582608  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:53.596481  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:53.596511  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:53.666134  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:53.666162  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:53.666178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.745621  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:53.745659  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.290744  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:56.307536  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:56.307610  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:56.353456  148123 cri.go:89] found id: ""
	I1010 19:25:56.353484  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.353494  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:56.353501  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:56.353553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:56.394093  148123 cri.go:89] found id: ""
	I1010 19:25:56.394120  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.394131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:56.394138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:56.394203  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:56.428388  148123 cri.go:89] found id: ""
	I1010 19:25:56.428428  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.428441  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:56.428450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:56.428524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:56.464414  148123 cri.go:89] found id: ""
	I1010 19:25:56.464459  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.464472  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:56.464480  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:56.464563  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:56.505271  148123 cri.go:89] found id: ""
	I1010 19:25:56.505307  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.505316  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:56.505322  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:56.505374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:56.546424  148123 cri.go:89] found id: ""
	I1010 19:25:56.546456  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.546467  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:56.546475  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:56.546545  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:56.587458  148123 cri.go:89] found id: ""
	I1010 19:25:56.587489  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.587501  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:56.587509  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:56.587588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:56.628248  148123 cri.go:89] found id: ""
	I1010 19:25:56.628279  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.628291  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:56.628304  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:56.628320  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:56.642109  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:56.642141  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:56.723646  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:56.723678  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:56.723696  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:56.805849  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:56.805899  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.849863  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:56.849891  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:55.230447  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.726896  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:55.066248  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.565240  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.051145  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:01.554276  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.407093  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:59.422915  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:59.423005  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:59.462867  148123 cri.go:89] found id: ""
	I1010 19:25:59.462898  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.462909  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:59.462917  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:59.462980  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:59.496931  148123 cri.go:89] found id: ""
	I1010 19:25:59.496958  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.496968  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:59.496983  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:59.497049  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:59.532241  148123 cri.go:89] found id: ""
	I1010 19:25:59.532271  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.532283  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:59.532291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:59.532352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:59.571285  148123 cri.go:89] found id: ""
	I1010 19:25:59.571313  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.571324  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:59.571331  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:59.571391  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:59.606687  148123 cri.go:89] found id: ""
	I1010 19:25:59.606721  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.606734  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:59.606741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:59.606800  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:59.643247  148123 cri.go:89] found id: ""
	I1010 19:25:59.643276  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.643286  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:59.643294  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:59.643369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:59.679305  148123 cri.go:89] found id: ""
	I1010 19:25:59.679335  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.679344  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:59.679350  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:59.679407  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:59.716149  148123 cri.go:89] found id: ""
	I1010 19:25:59.716198  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.716210  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:59.716222  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:59.716239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:59.730334  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:59.730364  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:59.799759  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.799786  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:59.799804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:59.881883  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:59.881925  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:59.923755  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:59.923784  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.478043  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:02.492627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:02.492705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:02.532133  148123 cri.go:89] found id: ""
	I1010 19:26:02.532160  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.532172  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:02.532179  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:02.532244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:02.571915  148123 cri.go:89] found id: ""
	I1010 19:26:02.571943  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.571951  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:02.571957  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:02.572014  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:02.616330  148123 cri.go:89] found id: ""
	I1010 19:26:02.616365  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.616376  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:02.616385  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:02.616455  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:02.662245  148123 cri.go:89] found id: ""
	I1010 19:26:02.662275  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.662284  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:02.662291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:02.662354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:02.698692  148123 cri.go:89] found id: ""
	I1010 19:26:02.698720  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.698731  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:02.698739  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:02.698805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:02.736643  148123 cri.go:89] found id: ""
	I1010 19:26:02.736667  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.736677  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:02.736685  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:02.736742  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:02.780356  148123 cri.go:89] found id: ""
	I1010 19:26:02.780388  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.780399  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:02.780407  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:02.780475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:02.816716  148123 cri.go:89] found id: ""
	I1010 19:26:02.816744  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.816752  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:02.816761  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:02.816775  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:02.899002  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:02.899046  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:02.942060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:02.942093  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.997605  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:02.997661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:03.011768  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:03.011804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:03.096404  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.727075  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.227526  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.565903  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.066179  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.049656  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.050425  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:05.596971  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:05.612524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:05.612598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:05.649999  148123 cri.go:89] found id: ""
	I1010 19:26:05.650032  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.650044  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:05.650052  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:05.650119  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:05.684365  148123 cri.go:89] found id: ""
	I1010 19:26:05.684398  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.684419  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:05.684427  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:05.684494  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:05.721801  148123 cri.go:89] found id: ""
	I1010 19:26:05.721832  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.721840  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:05.721847  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:05.721908  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:05.762010  148123 cri.go:89] found id: ""
	I1010 19:26:05.762037  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.762046  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:05.762056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:05.762117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:05.803425  148123 cri.go:89] found id: ""
	I1010 19:26:05.803457  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.803468  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:05.803476  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:05.803551  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:05.838800  148123 cri.go:89] found id: ""
	I1010 19:26:05.838833  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.838845  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:05.838853  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:05.838921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:05.875110  148123 cri.go:89] found id: ""
	I1010 19:26:05.875147  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.875158  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:05.875167  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:05.875245  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:05.908951  148123 cri.go:89] found id: ""
	I1010 19:26:05.908988  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.909000  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:05.909012  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:05.909027  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:05.922263  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:05.922293  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:05.996441  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:05.996465  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:05.996484  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:06.078113  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:06.078160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:06.122919  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:06.122956  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:04.726335  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.728178  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.066573  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.564991  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.566655  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.050522  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:10.550288  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.674464  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:08.688452  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:08.688570  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:08.726240  148123 cri.go:89] found id: ""
	I1010 19:26:08.726269  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.726279  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:08.726287  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:08.726370  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:08.764762  148123 cri.go:89] found id: ""
	I1010 19:26:08.764790  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.764799  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:08.764806  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:08.764887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:08.805522  148123 cri.go:89] found id: ""
	I1010 19:26:08.805552  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.805563  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:08.805572  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:08.805642  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:08.845694  148123 cri.go:89] found id: ""
	I1010 19:26:08.845729  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.845740  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:08.845747  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:08.845817  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:08.885024  148123 cri.go:89] found id: ""
	I1010 19:26:08.885054  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.885066  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:08.885074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:08.885138  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:08.927500  148123 cri.go:89] found id: ""
	I1010 19:26:08.927531  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.927540  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:08.927547  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:08.927616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:08.963870  148123 cri.go:89] found id: ""
	I1010 19:26:08.963904  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.963916  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:08.963924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:08.963988  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:08.997984  148123 cri.go:89] found id: ""
	I1010 19:26:08.998017  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.998027  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:08.998039  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:08.998056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:09.049307  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:09.049341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:09.063341  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:09.063432  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:09.145190  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:09.145214  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:09.145226  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:09.231409  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:09.231445  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:11.773475  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:11.788981  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:11.789055  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:11.830289  148123 cri.go:89] found id: ""
	I1010 19:26:11.830319  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.830327  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:11.830333  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:11.830399  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:11.867093  148123 cri.go:89] found id: ""
	I1010 19:26:11.867120  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.867131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:11.867138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:11.867212  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:11.902146  148123 cri.go:89] found id: ""
	I1010 19:26:11.902181  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.902192  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:11.902201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:11.902271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:11.938652  148123 cri.go:89] found id: ""
	I1010 19:26:11.938681  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.938691  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:11.938703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:11.938771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:11.975903  148123 cri.go:89] found id: ""
	I1010 19:26:11.975937  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.975947  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:11.975955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:11.976020  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:12.014083  148123 cri.go:89] found id: ""
	I1010 19:26:12.014111  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.014119  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:12.014126  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:12.014190  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:12.057832  148123 cri.go:89] found id: ""
	I1010 19:26:12.057858  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.057867  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:12.057874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:12.057925  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:12.103253  148123 cri.go:89] found id: ""
	I1010 19:26:12.103286  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.103298  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:12.103311  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:12.103326  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:12.171062  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:12.171104  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:12.185463  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:12.185496  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:12.261568  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:12.261592  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:12.261607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:12.345819  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:12.345860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:09.226954  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.227205  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.227457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.066777  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.565854  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:12.551323  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:15.051745  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:14.886283  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:14.901117  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:14.901213  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:14.935235  148123 cri.go:89] found id: ""
	I1010 19:26:14.935264  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.935273  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:14.935280  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:14.935336  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:14.970617  148123 cri.go:89] found id: ""
	I1010 19:26:14.970642  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.970652  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:14.970658  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:14.970722  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:15.011086  148123 cri.go:89] found id: ""
	I1010 19:26:15.011113  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.011122  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:15.011128  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:15.011188  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:15.053004  148123 cri.go:89] found id: ""
	I1010 19:26:15.053026  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.053033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:15.053039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:15.053099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:15.089275  148123 cri.go:89] found id: ""
	I1010 19:26:15.089304  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.089312  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:15.089319  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:15.089383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:15.126620  148123 cri.go:89] found id: ""
	I1010 19:26:15.126645  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.126654  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:15.126660  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:15.126723  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:15.160920  148123 cri.go:89] found id: ""
	I1010 19:26:15.160956  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.160969  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:15.160979  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:15.161037  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:15.197430  148123 cri.go:89] found id: ""
	I1010 19:26:15.197462  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.197474  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:15.197486  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:15.197507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:15.240905  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:15.240953  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:15.297663  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:15.297719  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:15.312279  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:15.312313  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:15.395296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:15.395320  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:15.395336  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:17.978030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:17.992574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:17.992651  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:18.028846  148123 cri.go:89] found id: ""
	I1010 19:26:18.028890  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.028901  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:18.028908  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:18.028965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:18.073004  148123 cri.go:89] found id: ""
	I1010 19:26:18.073033  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.073041  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:18.073047  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:18.073118  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:18.112062  148123 cri.go:89] found id: ""
	I1010 19:26:18.112098  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.112111  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:18.112121  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:18.112209  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:18.151565  148123 cri.go:89] found id: ""
	I1010 19:26:18.151597  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.151608  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:18.151616  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:18.151675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:18.197483  148123 cri.go:89] found id: ""
	I1010 19:26:18.197509  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.197518  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:18.197524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:18.197591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:18.235151  148123 cri.go:89] found id: ""
	I1010 19:26:18.235181  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.235193  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:18.235201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:18.235279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:18.273807  148123 cri.go:89] found id: ""
	I1010 19:26:18.273841  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.273852  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:18.273858  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:18.273920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:18.311644  148123 cri.go:89] found id: ""
	I1010 19:26:18.311677  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.311688  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:18.311700  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:18.311716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:18.355599  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:18.355635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:18.407786  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:18.407830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:18.422944  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:18.422976  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:18.496830  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:18.496873  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:18.496887  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:15.227600  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.726712  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:16.065701  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:18.066861  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.558257  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.050914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:21.108300  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:21.123177  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:21.123243  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:21.162544  148123 cri.go:89] found id: ""
	I1010 19:26:21.162575  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.162586  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:21.162595  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:21.162662  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:21.199799  148123 cri.go:89] found id: ""
	I1010 19:26:21.199828  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.199839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:21.199845  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:21.199901  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:21.246333  148123 cri.go:89] found id: ""
	I1010 19:26:21.246357  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.246366  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:21.246372  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:21.246433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:21.282681  148123 cri.go:89] found id: ""
	I1010 19:26:21.282719  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.282732  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:21.282740  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:21.282810  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:21.319500  148123 cri.go:89] found id: ""
	I1010 19:26:21.319535  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.319546  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:21.319554  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:21.319623  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:21.355779  148123 cri.go:89] found id: ""
	I1010 19:26:21.355809  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.355820  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:21.355828  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:21.355902  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:21.394567  148123 cri.go:89] found id: ""
	I1010 19:26:21.394608  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.394617  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:21.394623  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:21.394684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:21.430009  148123 cri.go:89] found id: ""
	I1010 19:26:21.430046  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.430058  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:21.430069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:21.430089  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:21.443267  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:21.443301  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:21.517046  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:21.517068  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:21.517083  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:21.594927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:21.594978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:21.634274  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:21.634311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:20.227157  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.727736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.566652  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:23.066459  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.550526  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.050647  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:24.189760  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:24.204355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:24.204420  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:24.248130  148123 cri.go:89] found id: ""
	I1010 19:26:24.248160  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.248171  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:24.248178  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:24.248244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:24.293281  148123 cri.go:89] found id: ""
	I1010 19:26:24.293312  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.293324  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:24.293331  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:24.293400  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:24.350709  148123 cri.go:89] found id: ""
	I1010 19:26:24.350743  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.350755  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:24.350765  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:24.350838  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:24.394113  148123 cri.go:89] found id: ""
	I1010 19:26:24.394152  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.394170  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:24.394181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:24.394256  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:24.445999  148123 cri.go:89] found id: ""
	I1010 19:26:24.446030  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.446042  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:24.446051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:24.446120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:24.483566  148123 cri.go:89] found id: ""
	I1010 19:26:24.483596  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.483605  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:24.483612  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:24.483665  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:24.520705  148123 cri.go:89] found id: ""
	I1010 19:26:24.520736  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.520748  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:24.520757  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:24.520825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:24.557316  148123 cri.go:89] found id: ""
	I1010 19:26:24.557346  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.557355  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:24.557364  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:24.557376  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:24.608065  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:24.608109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:24.623202  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:24.623250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:24.694665  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:24.694697  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:24.694716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.783369  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:24.783414  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.327524  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:27.341132  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:27.341214  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:27.376589  148123 cri.go:89] found id: ""
	I1010 19:26:27.376618  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.376627  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:27.376633  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:27.376684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:27.414452  148123 cri.go:89] found id: ""
	I1010 19:26:27.414491  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.414502  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:27.414510  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:27.414572  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:27.449839  148123 cri.go:89] found id: ""
	I1010 19:26:27.449867  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.449875  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:27.449882  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:27.449932  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:27.492445  148123 cri.go:89] found id: ""
	I1010 19:26:27.492472  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.492480  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:27.492487  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:27.492549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:27.533032  148123 cri.go:89] found id: ""
	I1010 19:26:27.533085  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.533095  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:27.533122  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:27.533199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:27.571096  148123 cri.go:89] found id: ""
	I1010 19:26:27.571122  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.571130  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:27.571135  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:27.571204  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:27.608762  148123 cri.go:89] found id: ""
	I1010 19:26:27.608798  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.608809  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:27.608818  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:27.608896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:27.648200  148123 cri.go:89] found id: ""
	I1010 19:26:27.648236  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.648248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:27.648260  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:27.648275  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.698124  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:27.698154  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:27.755009  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:27.755052  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:27.770048  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:27.770084  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:27.845475  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:27.845505  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:27.845519  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.729352  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:26.731831  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.566028  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.567052  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.555698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.049914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.423117  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:30.436706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:30.436768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:30.473367  148123 cri.go:89] found id: ""
	I1010 19:26:30.473396  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.473408  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:30.473417  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:30.473470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:30.510560  148123 cri.go:89] found id: ""
	I1010 19:26:30.510599  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.510610  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:30.510619  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:30.510687  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:30.549682  148123 cri.go:89] found id: ""
	I1010 19:26:30.549715  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.549726  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:30.549734  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:30.549798  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:30.586401  148123 cri.go:89] found id: ""
	I1010 19:26:30.586425  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.586434  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:30.586441  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:30.586492  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:30.622012  148123 cri.go:89] found id: ""
	I1010 19:26:30.622037  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.622045  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:30.622052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:30.622107  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:30.658336  148123 cri.go:89] found id: ""
	I1010 19:26:30.658364  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.658372  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:30.658378  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:30.658442  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:30.695495  148123 cri.go:89] found id: ""
	I1010 19:26:30.695523  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.695532  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:30.695538  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:30.695591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:30.735093  148123 cri.go:89] found id: ""
	I1010 19:26:30.735125  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.735136  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:30.735148  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:30.735163  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:30.816097  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:30.816140  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:30.853878  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:30.853915  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:30.906289  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:30.906341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:30.923742  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:30.923769  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:30.993226  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.493799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:33.508083  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:33.508166  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:33.545019  148123 cri.go:89] found id: ""
	I1010 19:26:33.545055  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.545072  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:33.545081  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:33.545150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:29.226673  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:31.227117  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.727777  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.068231  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.566025  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.050118  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:34.051720  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:36.550138  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.584215  148123 cri.go:89] found id: ""
	I1010 19:26:33.584242  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.584252  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:33.584259  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:33.584315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:33.620598  148123 cri.go:89] found id: ""
	I1010 19:26:33.620627  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.620636  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:33.620643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:33.620691  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:33.655225  148123 cri.go:89] found id: ""
	I1010 19:26:33.655252  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.655260  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:33.655267  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:33.655322  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:33.693464  148123 cri.go:89] found id: ""
	I1010 19:26:33.693490  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.693498  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:33.693505  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:33.693554  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:33.734024  148123 cri.go:89] found id: ""
	I1010 19:26:33.734059  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.734071  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:33.734079  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:33.734141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:33.769434  148123 cri.go:89] found id: ""
	I1010 19:26:33.769467  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.769476  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:33.769483  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:33.769533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:33.809011  148123 cri.go:89] found id: ""
	I1010 19:26:33.809040  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.809049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:33.809059  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:33.809071  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:33.864186  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:33.864230  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:33.878681  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:33.878711  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:33.958225  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.958250  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:33.958265  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:34.034730  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:34.034774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.577123  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:36.591494  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:36.591560  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:36.625170  148123 cri.go:89] found id: ""
	I1010 19:26:36.625210  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.625222  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:36.625230  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:36.625295  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:36.660790  148123 cri.go:89] found id: ""
	I1010 19:26:36.660821  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.660831  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:36.660838  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:36.660911  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:36.695742  148123 cri.go:89] found id: ""
	I1010 19:26:36.695774  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.695786  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:36.695793  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:36.695854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:36.729523  148123 cri.go:89] found id: ""
	I1010 19:26:36.729552  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.729561  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:36.729569  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:36.729630  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:36.765515  148123 cri.go:89] found id: ""
	I1010 19:26:36.765549  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.765561  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:36.765571  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:36.765635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:36.799110  148123 cri.go:89] found id: ""
	I1010 19:26:36.799144  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.799155  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:36.799164  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:36.799224  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:36.832806  148123 cri.go:89] found id: ""
	I1010 19:26:36.832834  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.832845  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:36.832865  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:36.832933  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:36.869093  148123 cri.go:89] found id: ""
	I1010 19:26:36.869126  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.869137  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:36.869149  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:36.869165  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:36.922229  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:36.922276  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:36.936973  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:36.937019  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:37.016400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:37.016425  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:37.016448  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:37.106308  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:37.106354  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.227451  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.726229  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:35.067396  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:37.565711  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.550438  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:41.050698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:39.652101  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:39.665546  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:39.665639  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:39.699646  148123 cri.go:89] found id: ""
	I1010 19:26:39.699675  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.699686  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:39.699693  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:39.699745  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:39.735568  148123 cri.go:89] found id: ""
	I1010 19:26:39.735592  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.735600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:39.735606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:39.735656  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:39.770021  148123 cri.go:89] found id: ""
	I1010 19:26:39.770049  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.770060  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:39.770067  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:39.770139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:39.804867  148123 cri.go:89] found id: ""
	I1010 19:26:39.804894  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.804903  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:39.804910  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:39.804972  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:39.847074  148123 cri.go:89] found id: ""
	I1010 19:26:39.847104  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.847115  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:39.847124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:39.847200  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:39.890334  148123 cri.go:89] found id: ""
	I1010 19:26:39.890368  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.890379  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:39.890387  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:39.890456  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:39.926316  148123 cri.go:89] found id: ""
	I1010 19:26:39.926346  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.926355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:39.926362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:39.926416  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:39.966179  148123 cri.go:89] found id: ""
	I1010 19:26:39.966215  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.966227  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:39.966239  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:39.966255  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:40.047457  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:40.047505  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.086611  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:40.086656  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:40.139123  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:40.139167  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:40.152966  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:40.152995  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:40.231009  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:42.731851  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:42.748201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:42.748287  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:42.787567  148123 cri.go:89] found id: ""
	I1010 19:26:42.787599  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.787607  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:42.787614  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:42.787697  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:42.831051  148123 cri.go:89] found id: ""
	I1010 19:26:42.831089  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.831100  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:42.831109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:42.831180  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:42.871352  148123 cri.go:89] found id: ""
	I1010 19:26:42.871390  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.871402  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:42.871410  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:42.871484  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:42.910283  148123 cri.go:89] found id: ""
	I1010 19:26:42.910316  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.910327  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:42.910335  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:42.910403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:42.946971  148123 cri.go:89] found id: ""
	I1010 19:26:42.947003  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.947012  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:42.947019  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:42.947087  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:42.985484  148123 cri.go:89] found id: ""
	I1010 19:26:42.985517  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.985528  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:42.985537  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:42.985603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:43.024500  148123 cri.go:89] found id: ""
	I1010 19:26:43.024535  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.024544  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:43.024552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:43.024608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:43.064008  148123 cri.go:89] found id: ""
	I1010 19:26:43.064039  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.064049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:43.064060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:43.064075  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:43.119367  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:43.119415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:43.133396  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:43.133428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:43.207447  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:43.207470  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:43.207491  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:43.290416  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:43.290456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.727919  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.227782  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:40.066461  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:42.565505  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.051835  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.052308  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.834115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:45.849172  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:45.849238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:45.886020  148123 cri.go:89] found id: ""
	I1010 19:26:45.886049  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.886060  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:45.886067  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:45.886152  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:45.923188  148123 cri.go:89] found id: ""
	I1010 19:26:45.923218  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.923245  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:45.923253  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:45.923316  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:45.960297  148123 cri.go:89] found id: ""
	I1010 19:26:45.960337  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.960351  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:45.960361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:45.960432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:45.995140  148123 cri.go:89] found id: ""
	I1010 19:26:45.995173  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.995184  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:45.995191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:45.995265  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:46.040390  148123 cri.go:89] found id: ""
	I1010 19:26:46.040418  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.040426  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:46.040433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:46.040500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:46.079999  148123 cri.go:89] found id: ""
	I1010 19:26:46.080029  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.080042  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:46.080052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:46.080105  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:46.117331  148123 cri.go:89] found id: ""
	I1010 19:26:46.117357  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.117364  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:46.117370  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:46.117441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:46.152248  148123 cri.go:89] found id: ""
	I1010 19:26:46.152279  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.152290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:46.152303  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:46.152319  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:46.192503  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:46.192533  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:46.246419  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:46.246456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:46.259864  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:46.259895  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:46.338518  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:46.338549  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:46.338567  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:45.726776  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.228318  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:44.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.065636  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.551013  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:50.053824  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.924216  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:48.938741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:48.938805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:48.973310  148123 cri.go:89] found id: ""
	I1010 19:26:48.973339  148123 logs.go:282] 0 containers: []
	W1010 19:26:48.973348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:48.973356  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:48.973411  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:49.010467  148123 cri.go:89] found id: ""
	I1010 19:26:49.010492  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.010500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:49.010506  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:49.010585  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:49.051621  148123 cri.go:89] found id: ""
	I1010 19:26:49.051646  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.051653  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:49.051664  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:49.051727  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:49.091090  148123 cri.go:89] found id: ""
	I1010 19:26:49.091121  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.091132  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:49.091140  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:49.091202  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:49.127669  148123 cri.go:89] found id: ""
	I1010 19:26:49.127712  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.127724  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:49.127732  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:49.127804  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:49.164446  148123 cri.go:89] found id: ""
	I1010 19:26:49.164476  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.164485  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:49.164491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:49.164553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:49.201222  148123 cri.go:89] found id: ""
	I1010 19:26:49.201263  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.201275  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:49.201284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:49.201345  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:49.238258  148123 cri.go:89] found id: ""
	I1010 19:26:49.238283  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.238290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:49.238299  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:49.238311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:49.313576  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:49.313619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:49.351066  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:49.351096  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:49.402772  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:49.402823  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:49.417713  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:49.417752  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:49.488834  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:51.989031  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:52.003056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:52.003140  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:52.039682  148123 cri.go:89] found id: ""
	I1010 19:26:52.039709  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.039718  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:52.039725  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:52.039788  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:52.081402  148123 cri.go:89] found id: ""
	I1010 19:26:52.081433  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.081443  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:52.081449  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:52.081502  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:52.118270  148123 cri.go:89] found id: ""
	I1010 19:26:52.118304  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.118315  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:52.118325  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:52.118392  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:52.154692  148123 cri.go:89] found id: ""
	I1010 19:26:52.154724  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.154735  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:52.154743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:52.154807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:52.190056  148123 cri.go:89] found id: ""
	I1010 19:26:52.190085  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.190094  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:52.190101  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:52.190161  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:52.225468  148123 cri.go:89] found id: ""
	I1010 19:26:52.225501  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.225511  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:52.225521  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:52.225589  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:52.260682  148123 cri.go:89] found id: ""
	I1010 19:26:52.260710  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.260718  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:52.260724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:52.260774  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:52.297613  148123 cri.go:89] found id: ""
	I1010 19:26:52.297642  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.297659  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:52.297672  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:52.297689  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:52.352224  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:52.352267  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:52.367003  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:52.367033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:52.443124  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:52.443157  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:52.443173  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:52.522391  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:52.522433  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:50.726363  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.727069  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:49.069109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:51.566132  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:53.567867  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.554195  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.050995  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.066877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:55.082191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:55.082258  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:55.120729  148123 cri.go:89] found id: ""
	I1010 19:26:55.120763  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.120775  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:55.120784  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:55.120863  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:55.154795  148123 cri.go:89] found id: ""
	I1010 19:26:55.154827  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.154839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:55.154848  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:55.154904  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:55.194846  148123 cri.go:89] found id: ""
	I1010 19:26:55.194876  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.194889  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:55.194897  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:55.194958  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:55.230173  148123 cri.go:89] found id: ""
	I1010 19:26:55.230201  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.230212  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:55.230220  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:55.230290  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:55.264999  148123 cri.go:89] found id: ""
	I1010 19:26:55.265025  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.265032  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:55.265039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:55.265096  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:55.303666  148123 cri.go:89] found id: ""
	I1010 19:26:55.303695  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.303703  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:55.303724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:55.303797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:55.342061  148123 cri.go:89] found id: ""
	I1010 19:26:55.342087  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.342098  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:55.342106  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:55.342171  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:55.378305  148123 cri.go:89] found id: ""
	I1010 19:26:55.378336  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.378345  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:55.378354  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:55.378365  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.416815  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:55.416865  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:55.467371  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:55.467415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:55.481704  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:55.481744  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:55.559219  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:55.559242  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:55.559254  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.141472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:58.154326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:58.154394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:58.193053  148123 cri.go:89] found id: ""
	I1010 19:26:58.193080  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.193088  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:58.193094  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:58.193142  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:58.232514  148123 cri.go:89] found id: ""
	I1010 19:26:58.232538  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.232545  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:58.232551  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:58.232599  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:58.266724  148123 cri.go:89] found id: ""
	I1010 19:26:58.266756  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.266765  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:58.266771  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:58.266824  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:58.301986  148123 cri.go:89] found id: ""
	I1010 19:26:58.302012  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.302019  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:58.302031  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:58.302080  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:58.340394  148123 cri.go:89] found id: ""
	I1010 19:26:58.340431  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.340440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:58.340448  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:58.340500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:58.407035  148123 cri.go:89] found id: ""
	I1010 19:26:58.407069  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.407081  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:58.407088  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:58.407151  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:58.446650  148123 cri.go:89] found id: ""
	I1010 19:26:58.446682  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.446694  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:58.446703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:58.446763  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:58.480719  148123 cri.go:89] found id: ""
	I1010 19:26:58.480750  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.480759  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:58.480768  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:58.480781  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.561568  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:58.561602  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.227199  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.726841  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:56.065787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.566732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.550718  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:59.550793  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.609237  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:58.609264  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:58.659705  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:58.659748  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:58.674646  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:58.674680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:58.750922  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.252098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:01.265420  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:01.265497  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:01.300133  148123 cri.go:89] found id: ""
	I1010 19:27:01.300168  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.300180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:01.300188  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:01.300272  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:01.337451  148123 cri.go:89] found id: ""
	I1010 19:27:01.337477  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.337485  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:01.337492  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:01.337539  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:01.371987  148123 cri.go:89] found id: ""
	I1010 19:27:01.372019  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.372028  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:01.372034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:01.372098  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:01.408996  148123 cri.go:89] found id: ""
	I1010 19:27:01.409022  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.409033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:01.409042  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:01.409109  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:01.443558  148123 cri.go:89] found id: ""
	I1010 19:27:01.443587  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.443595  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:01.443602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:01.443663  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:01.478517  148123 cri.go:89] found id: ""
	I1010 19:27:01.478546  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.478555  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:01.478561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:01.478611  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:01.513528  148123 cri.go:89] found id: ""
	I1010 19:27:01.513556  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.513568  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:01.513576  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:01.513641  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:01.558861  148123 cri.go:89] found id: ""
	I1010 19:27:01.558897  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.558909  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:01.558921  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:01.558937  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:01.599719  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:01.599747  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:01.650736  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:01.650774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:01.664643  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:01.664669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:01.736533  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.736557  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:01.736570  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:00.225540  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.226962  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:00.567193  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:03.066587  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.050439  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.050984  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:06.550977  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.317302  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:04.330450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:04.330519  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:04.368893  148123 cri.go:89] found id: ""
	I1010 19:27:04.368923  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.368932  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:04.368939  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:04.368993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:04.406598  148123 cri.go:89] found id: ""
	I1010 19:27:04.406625  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.406634  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:04.406640  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:04.406692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:04.441485  148123 cri.go:89] found id: ""
	I1010 19:27:04.441515  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.441525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:04.441532  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:04.441581  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:04.476663  148123 cri.go:89] found id: ""
	I1010 19:27:04.476690  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.476698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:04.476704  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:04.476765  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:04.512124  148123 cri.go:89] found id: ""
	I1010 19:27:04.512171  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.512186  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:04.512195  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:04.512260  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:04.547902  148123 cri.go:89] found id: ""
	I1010 19:27:04.547929  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.547940  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:04.547949  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:04.548008  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:04.585257  148123 cri.go:89] found id: ""
	I1010 19:27:04.585287  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.585297  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:04.585303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:04.585352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:04.621019  148123 cri.go:89] found id: ""
	I1010 19:27:04.621048  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.621057  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:04.621068  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:04.621080  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:04.662770  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:04.662801  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:04.714551  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:04.714592  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:04.730922  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:04.730951  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:04.841139  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.841163  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:04.841178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.428528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:07.442423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:07.442498  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:07.476722  148123 cri.go:89] found id: ""
	I1010 19:27:07.476753  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.476764  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:07.476772  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:07.476835  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:07.512211  148123 cri.go:89] found id: ""
	I1010 19:27:07.512245  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.512256  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:07.512263  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:07.512325  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:07.546546  148123 cri.go:89] found id: ""
	I1010 19:27:07.546588  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.546601  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:07.546619  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:07.546688  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:07.585743  148123 cri.go:89] found id: ""
	I1010 19:27:07.585768  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.585777  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:07.585783  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:07.585848  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:07.626827  148123 cri.go:89] found id: ""
	I1010 19:27:07.626855  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.626865  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:07.626874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:07.626960  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:07.661910  148123 cri.go:89] found id: ""
	I1010 19:27:07.661940  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.661948  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:07.661955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:07.662072  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:07.698255  148123 cri.go:89] found id: ""
	I1010 19:27:07.698288  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.698301  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:07.698309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:07.698374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:07.734389  148123 cri.go:89] found id: ""
	I1010 19:27:07.734417  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.734428  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:07.734440  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:07.734454  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.814089  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:07.814130  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:07.854799  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:07.854831  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:07.906595  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:07.906635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:07.921928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:07.921966  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:07.989839  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.727522  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.226694  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:05.565868  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.567139  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:09.050772  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:11.051291  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.490115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:10.504216  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:10.504292  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:10.554789  148123 cri.go:89] found id: ""
	I1010 19:27:10.554815  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.554825  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:10.554831  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:10.554889  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:10.618970  148123 cri.go:89] found id: ""
	I1010 19:27:10.618997  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.619005  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:10.619012  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:10.619061  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:10.666900  148123 cri.go:89] found id: ""
	I1010 19:27:10.666933  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.666946  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:10.666953  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:10.667023  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:10.704364  148123 cri.go:89] found id: ""
	I1010 19:27:10.704405  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.704430  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:10.704450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:10.704525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:10.742341  148123 cri.go:89] found id: ""
	I1010 19:27:10.742380  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.742389  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:10.742396  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:10.742461  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:10.783056  148123 cri.go:89] found id: ""
	I1010 19:27:10.783088  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.783098  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:10.783105  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:10.783174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:10.823198  148123 cri.go:89] found id: ""
	I1010 19:27:10.823224  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.823233  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:10.823243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:10.823302  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:10.858154  148123 cri.go:89] found id: ""
	I1010 19:27:10.858195  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.858204  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:10.858213  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:10.858229  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:10.935491  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:10.935518  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:10.935532  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:11.015653  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:11.015694  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:11.058765  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:11.058793  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:11.116748  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:11.116799  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:09.727270  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.225797  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.065372  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.065695  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.550669  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.051044  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.631579  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:13.644885  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:13.644953  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:13.685242  148123 cri.go:89] found id: ""
	I1010 19:27:13.685273  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.685285  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:13.685294  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:13.685364  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:13.720831  148123 cri.go:89] found id: ""
	I1010 19:27:13.720866  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.720877  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:13.720885  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:13.720947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:13.766374  148123 cri.go:89] found id: ""
	I1010 19:27:13.766404  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.766413  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:13.766419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:13.766468  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:13.804550  148123 cri.go:89] found id: ""
	I1010 19:27:13.804585  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.804597  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:13.804606  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:13.804674  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:13.842406  148123 cri.go:89] found id: ""
	I1010 19:27:13.842432  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.842440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:13.842460  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:13.842678  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:13.882365  148123 cri.go:89] found id: ""
	I1010 19:27:13.882402  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.882415  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:13.882423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:13.882505  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:13.926016  148123 cri.go:89] found id: ""
	I1010 19:27:13.926052  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.926065  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:13.926075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:13.926177  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:13.960485  148123 cri.go:89] found id: ""
	I1010 19:27:13.960523  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.960533  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:13.960542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:13.960558  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:14.013013  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:14.013055  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:14.026906  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:14.026935  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:14.104469  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:14.104494  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:14.104507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:14.185917  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:14.185959  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:16.732088  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:16.746646  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:16.746710  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:16.784715  148123 cri.go:89] found id: ""
	I1010 19:27:16.784739  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.784749  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:16.784755  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:16.784807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:16.824181  148123 cri.go:89] found id: ""
	I1010 19:27:16.824210  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.824220  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:16.824228  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:16.824289  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:16.860901  148123 cri.go:89] found id: ""
	I1010 19:27:16.860932  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.860941  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:16.860947  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:16.860997  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:16.895127  148123 cri.go:89] found id: ""
	I1010 19:27:16.895159  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.895175  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:16.895183  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:16.895266  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:16.931989  148123 cri.go:89] found id: ""
	I1010 19:27:16.932020  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.932028  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:16.932035  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:16.932086  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:16.967254  148123 cri.go:89] found id: ""
	I1010 19:27:16.967283  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.967292  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:16.967299  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:16.967347  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:17.003010  148123 cri.go:89] found id: ""
	I1010 19:27:17.003035  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.003043  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:17.003050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:17.003097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:17.040053  148123 cri.go:89] found id: ""
	I1010 19:27:17.040093  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.040102  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:17.040118  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:17.040131  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:17.093176  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:17.093217  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:17.108086  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:17.108116  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:17.186423  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:17.186452  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:17.186467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:17.271884  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:17.271926  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:14.227197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.739354  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:14.066233  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.565852  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.566337  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.051613  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:20.549888  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:19.817954  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:19.831924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:19.831993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:19.873415  148123 cri.go:89] found id: ""
	I1010 19:27:19.873441  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.873459  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:19.873466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:19.873527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:19.912174  148123 cri.go:89] found id: ""
	I1010 19:27:19.912207  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.912219  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:19.912227  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:19.912298  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:19.948422  148123 cri.go:89] found id: ""
	I1010 19:27:19.948456  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.948466  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:19.948472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:19.948524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:19.983917  148123 cri.go:89] found id: ""
	I1010 19:27:19.983949  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.983962  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:19.983970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:19.984024  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:20.019160  148123 cri.go:89] found id: ""
	I1010 19:27:20.019187  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.019198  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:20.019207  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:20.019271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:20.056073  148123 cri.go:89] found id: ""
	I1010 19:27:20.056104  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.056116  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:20.056124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:20.056187  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:20.095877  148123 cri.go:89] found id: ""
	I1010 19:27:20.095916  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.095928  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:20.095935  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:20.096007  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:20.132360  148123 cri.go:89] found id: ""
	I1010 19:27:20.132385  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.132394  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:20.132402  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:20.132413  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:20.190573  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:20.190619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:20.205785  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:20.205819  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:20.278882  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:20.278911  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:20.278924  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:20.363982  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:20.364024  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:22.906059  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:22.919839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:22.919912  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:22.958203  148123 cri.go:89] found id: ""
	I1010 19:27:22.958242  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.958252  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:22.958258  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:22.958312  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:22.995834  148123 cri.go:89] found id: ""
	I1010 19:27:22.995866  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.995874  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:22.995880  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:22.995945  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:23.035913  148123 cri.go:89] found id: ""
	I1010 19:27:23.035950  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.035962  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:23.035970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:23.036039  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:23.073995  148123 cri.go:89] found id: ""
	I1010 19:27:23.074036  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.074049  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:23.074057  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:23.074117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:23.111190  148123 cri.go:89] found id: ""
	I1010 19:27:23.111222  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.111234  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:23.111243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:23.111305  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:23.147771  148123 cri.go:89] found id: ""
	I1010 19:27:23.147797  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.147806  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:23.147814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:23.147878  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:23.185485  148123 cri.go:89] found id: ""
	I1010 19:27:23.185517  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.185527  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:23.185535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:23.185598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:23.222030  148123 cri.go:89] found id: ""
	I1010 19:27:23.222060  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.222070  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:23.222081  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:23.222097  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:23.301826  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:23.301849  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:23.301861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:23.378688  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:23.378730  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:23.426683  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:23.426717  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:23.478425  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:23.478467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:19.226994  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.727366  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.067094  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:23.567075  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:22.550076  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:24.551681  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:25.993768  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:26.008311  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:26.008383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:26.046315  148123 cri.go:89] found id: ""
	I1010 19:27:26.046343  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.046355  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:26.046364  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:26.046418  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:26.081823  148123 cri.go:89] found id: ""
	I1010 19:27:26.081847  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.081855  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:26.081861  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:26.081913  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:26.117139  148123 cri.go:89] found id: ""
	I1010 19:27:26.117167  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.117183  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:26.117191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:26.117242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:26.154477  148123 cri.go:89] found id: ""
	I1010 19:27:26.154501  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.154510  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:26.154517  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:26.154568  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:26.187497  148123 cri.go:89] found id: ""
	I1010 19:27:26.187527  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.187535  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:26.187541  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:26.187604  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:26.223110  148123 cri.go:89] found id: ""
	I1010 19:27:26.223140  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.223151  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:26.223160  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:26.223226  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:26.264975  148123 cri.go:89] found id: ""
	I1010 19:27:26.265003  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.265011  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:26.265017  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:26.265067  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:26.307467  148123 cri.go:89] found id: ""
	I1010 19:27:26.307494  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.307503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:26.307512  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:26.307524  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:26.346313  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:26.346342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:26.400069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:26.400109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:26.415467  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:26.415500  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:26.486986  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:26.487016  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:26.487033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:24.226736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.228720  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.726470  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.067100  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.565675  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:27.051110  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.051207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.553085  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.064522  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:29.077631  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:29.077696  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:29.116127  148123 cri.go:89] found id: ""
	I1010 19:27:29.116154  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.116164  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:29.116172  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:29.116241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:29.152863  148123 cri.go:89] found id: ""
	I1010 19:27:29.152893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.152901  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:29.152907  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:29.152963  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:29.189951  148123 cri.go:89] found id: ""
	I1010 19:27:29.189983  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.189992  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:29.189999  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:29.190060  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:29.226611  148123 cri.go:89] found id: ""
	I1010 19:27:29.226646  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.226657  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:29.226665  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:29.226729  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:29.271884  148123 cri.go:89] found id: ""
	I1010 19:27:29.271921  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.271934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:29.271944  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:29.272016  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:29.306128  148123 cri.go:89] found id: ""
	I1010 19:27:29.306168  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.306181  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:29.306191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:29.306255  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:29.341869  148123 cri.go:89] found id: ""
	I1010 19:27:29.341893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.341901  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:29.341908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:29.341962  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:29.376213  148123 cri.go:89] found id: ""
	I1010 19:27:29.376240  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.376249  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:29.376259  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:29.376273  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:29.428827  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:29.428878  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:29.443719  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:29.443754  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:29.516166  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:29.516193  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:29.516208  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:29.596643  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:29.596681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
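Each diagnostics pass above runs crictl once per control-plane component and treats an empty ID list as "no container found". A rough local equivalent of one such pass, assuming crictl is installed and can be run without sudo (the log runs these over SSH with sudo):

// crictl_probe.go - hedged sketch of the per-component container probe seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	// -a: all states, -q: IDs only, --name: filter by container name (as in the log).
	out, err := exec.Command("crictl", "ps", "-a", "-q", "--name", name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := containerIDs(c)
		switch {
		case err != nil:
			fmt.Printf("%s: probe failed: %v\n", c, err)
		case len(ids) == 0:
			fmt.Printf("%s: no container found\n", c) // matches the warnings in the log
		default:
			fmt.Printf("%s: %d container(s)\n", c, len(ids))
		}
	}
}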
	I1010 19:27:32.135286  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:32.148791  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:32.148893  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:32.185511  148123 cri.go:89] found id: ""
	I1010 19:27:32.185542  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.185553  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:32.185560  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:32.185626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:32.222908  148123 cri.go:89] found id: ""
	I1010 19:27:32.222934  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.222942  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:32.222948  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:32.222999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:32.260998  148123 cri.go:89] found id: ""
	I1010 19:27:32.261033  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.261045  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:32.261063  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:32.261128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:32.298311  148123 cri.go:89] found id: ""
	I1010 19:27:32.298339  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.298348  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:32.298355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:32.298409  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:32.334134  148123 cri.go:89] found id: ""
	I1010 19:27:32.334220  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.334236  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:32.334249  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:32.334319  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:32.369690  148123 cri.go:89] found id: ""
	I1010 19:27:32.369723  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.369735  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:32.369743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:32.369807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:32.407164  148123 cri.go:89] found id: ""
	I1010 19:27:32.407210  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.407218  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:32.407224  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:32.407279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:32.446468  148123 cri.go:89] found id: ""
	I1010 19:27:32.446494  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.446505  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:32.446519  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:32.446535  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:32.497953  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:32.497997  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:32.513590  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:32.513620  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:32.594037  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:32.594072  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:32.594090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:32.673546  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:32.673587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:30.727725  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:32.727813  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.066731  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:33.067815  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:34.050574  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:36.550119  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.226084  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:35.241152  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:35.241222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:35.283208  148123 cri.go:89] found id: ""
	I1010 19:27:35.283245  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.283269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:35.283286  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:35.283346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:35.320406  148123 cri.go:89] found id: ""
	I1010 19:27:35.320444  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.320457  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:35.320466  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:35.320523  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:35.360984  148123 cri.go:89] found id: ""
	I1010 19:27:35.361015  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.361027  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:35.361034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:35.361097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:35.400191  148123 cri.go:89] found id: ""
	I1010 19:27:35.400219  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.400230  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:35.400244  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:35.400314  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:35.438092  148123 cri.go:89] found id: ""
	I1010 19:27:35.438126  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.438139  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:35.438147  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:35.438215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:35.472656  148123 cri.go:89] found id: ""
	I1010 19:27:35.472681  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.472688  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:35.472694  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:35.472743  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.505895  148123 cri.go:89] found id: ""
	I1010 19:27:35.505931  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.505942  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:35.505950  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:35.506015  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:35.540406  148123 cri.go:89] found id: ""
	I1010 19:27:35.540441  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.540451  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:35.540462  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:35.540478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:35.555874  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:35.555903  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:35.628400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:35.628427  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:35.628453  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:35.708262  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:35.708305  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:35.752796  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:35.752824  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.306212  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:38.319473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:38.319543  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:38.357493  148123 cri.go:89] found id: ""
	I1010 19:27:38.357520  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.357529  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:38.357536  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:38.357588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:38.395908  148123 cri.go:89] found id: ""
	I1010 19:27:38.395935  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.395943  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:38.395949  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:38.396010  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:38.434495  148123 cri.go:89] found id: ""
	I1010 19:27:38.434529  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.434539  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:38.434545  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:38.434608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:38.474647  148123 cri.go:89] found id: ""
	I1010 19:27:38.474686  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.474698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:38.474710  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:38.474784  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:38.511105  148123 cri.go:89] found id: ""
	I1010 19:27:38.511133  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.511141  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:38.511148  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:38.511199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:38.551011  148123 cri.go:89] found id: ""
	I1010 19:27:38.551055  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.551066  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:38.551074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:38.551139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.227301  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:37.726528  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.567838  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.066658  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.552499  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.544561  147758 pod_ready.go:82] duration metric: took 4m0.00091784s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	E1010 19:27:40.544600  147758 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:27:40.544623  147758 pod_ready.go:39] duration metric: took 4m15.623470592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:27:40.544664  147758 kubeadm.go:597] duration metric: took 4m22.92080204s to restartPrimaryControlPlane
	W1010 19:27:40.544737  147758 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:40.544829  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
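Once the 4m0s wait expires, the run falls back to a full kubeadm reset with the version-pinned binaries directory prepended to PATH. A hedged sketch of invoking that same command from Go; the binary path and CRI socket are copied from the log, the local PATH substitution is an assumption, and running this for real is destructive:

// kubeadm_reset.go - hedged sketch: run kubeadm reset with a version-pinned PATH.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "env",
		"PATH=/var/lib/minikube/binaries/v1.31.1:"+os.Getenv("PATH"),
		"kubeadm", "reset",
		"--cri-socket", "/var/run/crio/crio.sock",
		"--force")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm reset failed:", err)
		os.Exit(1)
	}
}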
	I1010 19:27:38.590579  148123 cri.go:89] found id: ""
	I1010 19:27:38.590606  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.590615  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:38.590621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:38.590686  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:38.627775  148123 cri.go:89] found id: ""
	I1010 19:27:38.627812  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.627826  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:38.627838  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:38.627854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.683195  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:38.683239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:38.697220  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:38.697250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:38.771619  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:38.771651  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:38.771668  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:38.851324  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:38.851362  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.408165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:41.422958  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:41.423032  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:41.466774  148123 cri.go:89] found id: ""
	I1010 19:27:41.466806  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.466816  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:41.466823  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:41.466874  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:41.506429  148123 cri.go:89] found id: ""
	I1010 19:27:41.506470  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.506482  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:41.506489  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:41.506555  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:41.548929  148123 cri.go:89] found id: ""
	I1010 19:27:41.548965  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.548976  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:41.548983  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:41.549044  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:41.592243  148123 cri.go:89] found id: ""
	I1010 19:27:41.592274  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.592283  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:41.592290  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:41.592342  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:41.628516  148123 cri.go:89] found id: ""
	I1010 19:27:41.628558  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.628570  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:41.628578  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:41.628650  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:41.665011  148123 cri.go:89] found id: ""
	I1010 19:27:41.665041  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.665053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:41.665060  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:41.665112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:41.702650  148123 cri.go:89] found id: ""
	I1010 19:27:41.702681  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.702692  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:41.702700  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:41.702764  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:41.738971  148123 cri.go:89] found id: ""
	I1010 19:27:41.739002  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.739021  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:41.739031  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:41.739062  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:41.813296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:41.813321  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:41.813335  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:41.895138  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:41.895185  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.941975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:41.942012  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:41.994838  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:41.994888  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:39.727140  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:41.728263  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.566241  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:43.065219  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:44.511153  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:44.524871  148123 kubeadm.go:597] duration metric: took 4m4.194337013s to restartPrimaryControlPlane
	W1010 19:27:44.524953  148123 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:44.524988  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:44.994063  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:27:45.010926  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:27:45.021757  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:27:45.032246  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:27:45.032270  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:27:45.032326  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:27:45.042799  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:27:45.042866  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:27:45.053017  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:27:45.063373  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:27:45.063445  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:27:45.074689  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.085187  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:27:45.085241  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.096275  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:27:45.106686  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:27:45.106750  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
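The cleanup above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (in this run the files are simply missing). A local-filesystem sketch of the same check-then-remove pattern, assuming direct read access to the files rather than the sudo-over-SSH path the log uses:

// stale_kubeconfig.go - hedged sketch of the "grep for the endpoint, else remove" cleanup.
package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			// Missing file: nothing to clean up (the log hits this case for every file).
			fmt.Printf("%s: %v\n", f, err)
			continue
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			// Config points somewhere else; remove it so kubeadm regenerates it.
			if err := os.Remove(f); err != nil {
				fmt.Printf("%s: remove failed: %v\n", f, err)
				continue
			}
			fmt.Printf("%s: removed (did not reference %s)\n", f, endpoint)
			continue
		}
		fmt.Printf("%s: keeps expected endpoint\n", f)
	}
}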
	I1010 19:27:45.117018  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:27:45.358920  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:27:44.226853  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:46.227586  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:48.727469  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:45.066410  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:47.569864  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:51.230704  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:53.727351  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:50.065845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:52.066267  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:55.727457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:58.226861  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:54.564611  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:56.566702  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:00.728542  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.225779  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:59.065614  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:01.068088  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.566502  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.739904  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.195045639s)
	I1010 19:28:06.739984  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:06.756046  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:06.768580  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:06.780663  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:06.780732  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:06.780807  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:28:06.792092  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:06.792179  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:06.804515  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:28:06.814969  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:06.815040  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:06.826056  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.836050  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:06.836108  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.846125  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:28:06.855505  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:06.855559  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:06.865367  147758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:06.916227  147758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:06.916375  147758 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:07.036539  147758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:07.036652  147758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:07.036762  147758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:07.044897  147758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:07.046978  147758 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:07.047117  147758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:07.047229  147758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:07.047384  147758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:07.047467  147758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:07.047584  147758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:07.047675  147758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:07.047794  147758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:07.047902  147758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:07.048005  147758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:07.048093  147758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:07.048142  147758 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:07.048210  147758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:07.127836  147758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:07.434492  147758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:07.487567  147758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:07.731314  147758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:07.919060  147758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:07.919565  147758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:07.922740  147758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:05.227611  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.229836  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.065246  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:08.067360  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.925140  147758 out.go:235]   - Booting up control plane ...
	I1010 19:28:07.925239  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:07.925356  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:07.925444  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:07.944375  147758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:07.951182  147758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:07.951274  147758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:08.087325  147758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:08.087560  147758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:08.598361  147758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.081439ms
	I1010 19:28:08.598502  147758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
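The kubelet-check and api-check phases above poll local health endpoints until they answer or the 4m0s budget runs out. A minimal sketch of that style of probe against the kubelet healthz address quoted in the log (127.0.0.1:10248), assuming it is run on the node itself:

// healthz_poll.go - hedged sketch: poll the kubelet healthz endpoint until it reports healthy.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "http://127.0.0.1:10248/healthz" // endpoint quoted in the kubelet-check output
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute) // same budget kubeadm states in the log
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kubelet healthz")
}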
	I1010 19:28:09.727932  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:12.227939  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:10.566945  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:13.067142  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.100517  147758 kubeadm.go:310] [api-check] The API server is healthy after 5.501985157s
	I1010 19:28:14.119932  147758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:14.149557  147758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:14.207413  147758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:14.207735  147758 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-541370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:14.226199  147758 kubeadm.go:310] [bootstrap-token] Using token: sbg4v0.t5me93bb5vn8m913
	I1010 19:28:14.228059  147758 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:14.228208  147758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:14.241706  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:14.256554  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:14.263129  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:14.274346  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:14.282313  147758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:14.507850  147758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:14.970234  147758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:15.508328  147758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:15.509530  147758 kubeadm.go:310] 
	I1010 19:28:15.509635  147758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:15.509653  147758 kubeadm.go:310] 
	I1010 19:28:15.509743  147758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:15.509762  147758 kubeadm.go:310] 
	I1010 19:28:15.509795  147758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:15.509888  147758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:15.509954  147758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:15.509970  147758 kubeadm.go:310] 
	I1010 19:28:15.510083  147758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:15.510103  147758 kubeadm.go:310] 
	I1010 19:28:15.510203  147758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:15.510214  147758 kubeadm.go:310] 
	I1010 19:28:15.510297  147758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:15.510410  147758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:15.510489  147758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:15.510495  147758 kubeadm.go:310] 
	I1010 19:28:15.510603  147758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:15.510707  147758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:15.510724  147758 kubeadm.go:310] 
	I1010 19:28:15.510807  147758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.510958  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:15.511005  147758 kubeadm.go:310] 	--control-plane 
	I1010 19:28:15.511034  147758 kubeadm.go:310] 
	I1010 19:28:15.511161  147758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:15.511173  147758 kubeadm.go:310] 
	I1010 19:28:15.511268  147758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.511403  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:15.512298  147758 kubeadm.go:310] W1010 19:28:06.890572    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512594  147758 kubeadm.go:310] W1010 19:28:06.891448    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512702  147758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
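The join commands printed above pin the cluster CA with --discovery-token-ca-cert-hash, a sha256 over the DER-encoded public key (SubjectPublicKeyInfo) of the CA certificate. A small sketch that recomputes such a hash from a PEM-encoded CA file; the path is the conventional kubeadm location and is assumed here:

// ca_hash.go - hedged sketch: recompute a kubeadm-style discovery-token-ca-cert-hash.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional path, assumed
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The hash covers the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}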
	I1010 19:28:15.512734  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:28:15.512744  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:15.514703  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:15.516229  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:15.527554  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
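The bridge CNI step copies a small conflist into /etc/cni/net.d. The exact 496-byte file written here is not shown in the log; below is a generic bridge-plus-portmap conflist of the kind the CNI bridge plugin accepts, embedded in Go only so it can be syntax-checked. The network name and subnet are placeholders, not values taken from this run:

// cni_conflist.go - hedged sketch: a generic bridge CNI conflist (not the exact file above).
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Sanity-check that the document is valid JSON before it would be written to /etc/cni/net.d.
	var v map[string]any
	if err := json.Unmarshal([]byte(conflist), &v); err != nil {
		log.Fatal(err)
	}
	fmt.Println("conflist parses; plugins:", len(v["plugins"].([]any)))
}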
	I1010 19:28:15.549266  147758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:15.549362  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:15.549399  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-541370 minikube.k8s.io/updated_at=2024_10_10T19_28_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=embed-certs-541370 minikube.k8s.io/primary=true
	I1010 19:28:15.590732  147758 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:15.740942  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.241392  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.741807  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:14.229241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:16.727260  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.059512  148525 pod_ready.go:82] duration metric: took 4m0.00022742s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:14.059550  148525 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:28:14.059569  148525 pod_ready.go:39] duration metric: took 4m7.001942194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:14.059614  148525 kubeadm.go:597] duration metric: took 4m14.998320151s to restartPrimaryControlPlane
	W1010 19:28:14.059672  148525 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:28:14.059698  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:28:17.241315  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:17.741580  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.241006  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.742042  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.241251  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.741030  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.862541  147758 kubeadm.go:1113] duration metric: took 4.313246481s to wait for elevateKubeSystemPrivileges
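The elevateKubeSystemPrivileges step above retries "kubectl get sa default" roughly every half second until the API server can serve it, which in this run took about 4.3s. A hedged sketch of that retry-until-success loop; the kubeconfig path is copied from the log, while the kubectl-on-PATH assumption and the overall budget are placeholders:

// wait_default_sa.go - hedged sketch: retry "kubectl get sa default" until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // arbitrary budget for the sketch
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}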
	I1010 19:28:19.862579  147758 kubeadm.go:394] duration metric: took 5m2.288571479s to StartCluster
	I1010 19:28:19.862628  147758 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.862751  147758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:19.864528  147758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.864812  147758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:19.864910  147758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:19.865019  147758 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-541370"
	I1010 19:28:19.865041  147758 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-541370"
	W1010 19:28:19.865053  147758 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:19.865062  147758 addons.go:69] Setting default-storageclass=true in profile "embed-certs-541370"
	I1010 19:28:19.865085  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865077  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:19.865129  147758 addons.go:69] Setting metrics-server=true in profile "embed-certs-541370"
	I1010 19:28:19.865164  147758 addons.go:234] Setting addon metrics-server=true in "embed-certs-541370"
	W1010 19:28:19.865179  147758 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:19.865115  147758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-541370"
	I1010 19:28:19.865215  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865558  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865593  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865607  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865629  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865595  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865725  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.866857  147758 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:19.868590  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:19.882524  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I1010 19:28:19.882595  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I1010 19:28:19.882678  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I1010 19:28:19.883065  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883168  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883281  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883559  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883575  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883657  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883669  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883802  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883818  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883968  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.883976  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884141  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884194  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.884408  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884437  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.884684  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884746  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.887912  147758 addons.go:234] Setting addon default-storageclass=true in "embed-certs-541370"
	W1010 19:28:19.887942  147758 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:19.887973  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.888333  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.888383  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.901588  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1010 19:28:19.902131  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.902597  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.902621  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.902927  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.903101  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.904556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.905207  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1010 19:28:19.905621  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.906188  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.906209  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.906599  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.906647  147758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:19.906837  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.907699  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1010 19:28:19.908147  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.908557  147758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:19.908584  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:19.908610  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.908705  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.908717  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.908745  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.909364  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.910154  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.910208  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.910840  147758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:19.912716  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.912722  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:19.912743  147758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:19.912769  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.913199  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.913224  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.913500  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.913682  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.913845  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.913972  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.921800  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922343  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.922374  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922653  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.922842  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.922965  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.923108  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.935097  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I1010 19:28:19.935605  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.936123  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.936146  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.936561  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.936747  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.938789  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.939019  147758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:19.939034  147758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:19.939054  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.941682  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942137  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.942165  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942404  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.942642  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.942767  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.942915  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:20.108247  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:20.149819  147758 node_ready.go:35] waiting up to 6m0s for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163096  147758 node_ready.go:49] node "embed-certs-541370" has status "Ready":"True"
	I1010 19:28:20.163118  147758 node_ready.go:38] duration metric: took 13.26779ms for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163128  147758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:20.168620  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:20.241952  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:20.241978  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:20.249679  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:20.290149  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:20.290190  147758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:20.291475  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:20.410539  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.410582  147758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:20.491567  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.684370  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684403  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.684695  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.684742  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.684749  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.684756  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684764  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.685029  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.685059  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.685036  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.695901  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.695926  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.696202  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.696249  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439463  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147952803s)
	I1010 19:28:21.439626  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.439659  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.439951  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.439969  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.439976  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439997  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.440009  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.440299  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.440298  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.440314  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.780486  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.288854773s)
	I1010 19:28:21.780551  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.780567  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.780948  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.780980  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.780996  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781007  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.781016  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.781289  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.781310  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781331  147758 addons.go:475] Verifying addon metrics-server=true in "embed-certs-541370"
	I1010 19:28:21.783512  147758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:21.784958  147758 addons.go:510] duration metric: took 1.92006141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:19.225844  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:21.227960  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:23.726439  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:22.195129  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:24.678736  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:25.727053  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.727657  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.177348  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:29.177459  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.177485  147758 pod_ready.go:82] duration metric: took 9.008841503s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.177495  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182744  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.182777  147758 pod_ready.go:82] duration metric: took 5.273263ms for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182791  147758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191507  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.191539  147758 pod_ready.go:82] duration metric: took 8.738961ms for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191554  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199167  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.199218  147758 pod_ready.go:82] duration metric: took 7.635672ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199234  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204558  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.204581  147758 pod_ready.go:82] duration metric: took 5.337574ms for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204591  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573781  147758 pod_ready.go:93] pod "kube-proxy-6hdds" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.573808  147758 pod_ready.go:82] duration metric: took 369.210969ms for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573818  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974015  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.974039  147758 pod_ready.go:82] duration metric: took 400.214845ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974048  147758 pod_ready.go:39] duration metric: took 9.810911064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:29.974066  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:29.974120  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:29.991332  147758 api_server.go:72] duration metric: took 10.126480862s to wait for apiserver process to appear ...
	I1010 19:28:29.991356  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:29.991382  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:28:29.995855  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:28:29.997488  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:28:29.997516  147758 api_server.go:131] duration metric: took 6.152312ms to wait for apiserver health ...
	I1010 19:28:29.997526  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:28:30.176631  147758 system_pods.go:59] 9 kube-system pods found
	I1010 19:28:30.176662  147758 system_pods.go:61] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.176668  147758 system_pods.go:61] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.176672  147758 system_pods.go:61] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.176676  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.176680  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.176683  147758 system_pods.go:61] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.176686  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.176693  147758 system_pods.go:61] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.176699  147758 system_pods.go:61] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.176707  147758 system_pods.go:74] duration metric: took 179.174083ms to wait for pod list to return data ...
	I1010 19:28:30.176714  147758 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:28:30.375326  147758 default_sa.go:45] found service account: "default"
	I1010 19:28:30.375361  147758 default_sa.go:55] duration metric: took 198.640267ms for default service account to be created ...
	I1010 19:28:30.375374  147758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:28:30.578749  147758 system_pods.go:86] 9 kube-system pods found
	I1010 19:28:30.578780  147758 system_pods.go:89] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.578786  147758 system_pods.go:89] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.578790  147758 system_pods.go:89] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.578794  147758 system_pods.go:89] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.578797  147758 system_pods.go:89] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.578801  147758 system_pods.go:89] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.578804  147758 system_pods.go:89] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.578810  147758 system_pods.go:89] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.578814  147758 system_pods.go:89] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.578822  147758 system_pods.go:126] duration metric: took 203.441477ms to wait for k8s-apps to be running ...
	I1010 19:28:30.578829  147758 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:28:30.578877  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:30.596523  147758 system_svc.go:56] duration metric: took 17.684729ms WaitForService to wait for kubelet
	I1010 19:28:30.596553  147758 kubeadm.go:582] duration metric: took 10.731708748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:28:30.596573  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:28:30.774749  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:28:30.774783  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:28:30.774807  147758 node_conditions.go:105] duration metric: took 178.228671ms to run NodePressure ...
	I1010 19:28:30.774822  147758 start.go:241] waiting for startup goroutines ...
	I1010 19:28:30.774831  147758 start.go:246] waiting for cluster config update ...
	I1010 19:28:30.774845  147758 start.go:255] writing updated cluster config ...
	I1010 19:28:30.775121  147758 ssh_runner.go:195] Run: rm -f paused
	I1010 19:28:30.826689  147758 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:28:30.828795  147758 out.go:177] * Done! kubectl is now configured to use "embed-certs-541370" cluster and "default" namespace by default
	I1010 19:28:29.728096  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:32.229632  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:34.726536  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:36.727032  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:38.727488  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:40.372903  148525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.31317648s)
	I1010 19:28:40.372991  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:40.389319  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:40.400123  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:40.411906  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:40.411932  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:40.411976  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:28:40.421840  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:40.421904  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:40.432229  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:28:40.442121  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:40.442203  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:40.452969  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.463085  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:40.463146  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.473103  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:28:40.482854  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:40.482914  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:40.494023  148525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:40.543369  148525 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:40.543466  148525 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:40.657301  148525 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:40.657462  148525 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:40.657579  148525 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:40.669222  148525 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:40.670995  148525 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:40.671102  148525 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:40.671171  148525 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:40.671284  148525 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:40.671374  148525 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:40.671471  148525 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:40.671557  148525 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:40.671650  148525 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:40.671751  148525 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:40.671895  148525 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:40.672000  148525 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:40.672056  148525 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:40.672136  148525 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:40.876613  148525 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:41.109518  148525 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:41.186751  148525 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:41.424710  148525 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:41.479611  148525 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:41.480235  148525 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:41.483222  148525 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:41.227521  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:43.728023  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:41.484809  148525 out.go:235]   - Booting up control plane ...
	I1010 19:28:41.484935  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:41.485020  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:41.485317  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:41.506919  148525 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:41.517006  148525 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:41.517077  148525 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:41.653211  148525 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:41.653364  148525 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:42.655360  148525 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001910447s
	I1010 19:28:42.655482  148525 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:47.658431  148525 kubeadm.go:310] [api-check] The API server is healthy after 5.003169217s
	I1010 19:28:47.676178  148525 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:47.694752  148525 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:47.720376  148525 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:47.720645  148525 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-361847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:47.736489  148525 kubeadm.go:310] [bootstrap-token] Using token: cprf0t.lm4xp75yi0cdu4sy
	I1010 19:28:46.228217  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:48.726740  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:47.737958  148525 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:47.738089  148525 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:47.750073  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:47.758010  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:47.761649  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:47.768953  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:47.774428  148525 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:48.065988  148525 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:48.502538  148525 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:49.066479  148525 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:49.069842  148525 kubeadm.go:310] 
	I1010 19:28:49.069937  148525 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:49.069947  148525 kubeadm.go:310] 
	I1010 19:28:49.070046  148525 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:49.070058  148525 kubeadm.go:310] 
	I1010 19:28:49.070089  148525 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:49.070166  148525 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:49.070254  148525 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:49.070265  148525 kubeadm.go:310] 
	I1010 19:28:49.070342  148525 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:49.070353  148525 kubeadm.go:310] 
	I1010 19:28:49.070446  148525 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:49.070478  148525 kubeadm.go:310] 
	I1010 19:28:49.070544  148525 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:49.070640  148525 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:49.070750  148525 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:49.070773  148525 kubeadm.go:310] 
	I1010 19:28:49.070880  148525 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:49.070990  148525 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:49.071001  148525 kubeadm.go:310] 
	I1010 19:28:49.071153  148525 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.071299  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:49.071330  148525 kubeadm.go:310] 	--control-plane 
	I1010 19:28:49.071349  148525 kubeadm.go:310] 
	I1010 19:28:49.071468  148525 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:49.071497  148525 kubeadm.go:310] 
	I1010 19:28:49.072228  148525 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.072354  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:49.074595  148525 kubeadm.go:310] W1010 19:28:40.525557    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.074944  148525 kubeadm.go:310] W1010 19:28:40.526329    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.075102  148525 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:49.075143  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:28:49.075166  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:49.077190  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:49.078665  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:49.091792  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:49.113801  148525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:49.113920  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-361847 minikube.k8s.io/updated_at=2024_10_10T19_28_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=default-k8s-diff-port-361847 minikube.k8s.io/primary=true
	I1010 19:28:49.114074  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.154398  148525 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:49.351271  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.852049  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.351441  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.852022  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.351391  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.851329  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.351840  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.852392  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.351397  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.443325  148525 kubeadm.go:1113] duration metric: took 4.329288133s to wait for elevateKubeSystemPrivileges
	I1010 19:28:53.443363  148525 kubeadm.go:394] duration metric: took 4m54.439732071s to StartCluster
	I1010 19:28:53.443386  148525 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.443481  148525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:53.445465  148525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.445747  148525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:53.445842  148525 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:53.445957  148525 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.445980  148525 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.445992  148525 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:53.446004  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:53.446026  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446065  148525 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446100  148525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-361847"
	I1010 19:28:53.446085  148525 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446137  148525 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.446151  148525 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:53.446242  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446515  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.446562  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447089  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447135  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447315  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447360  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.450779  148525 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:53.452838  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:53.465502  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I1010 19:28:53.466020  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.466572  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.466594  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.466772  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1010 19:28:53.467034  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.467209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.467310  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.467828  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.467857  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.467899  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1010 19:28:53.468270  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.468451  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.468866  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.468891  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.469102  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.469150  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.469484  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.470068  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.470114  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.471192  148525 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.471213  148525 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:53.471261  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.471618  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.471664  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.486550  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1010 19:28:53.487068  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.487608  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.487626  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.488015  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.488329  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.490200  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I1010 19:28:53.490240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.490790  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.491318  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.491341  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.491682  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.491957  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1010 19:28:53.492100  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.492423  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.492731  148525 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:53.492811  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.492831  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.493240  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.493885  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.493979  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.494031  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.494359  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:53.494381  148525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:53.494397  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.495771  148525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:51.226596  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227299  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227335  147213 pod_ready.go:82] duration metric: took 4m0.007224391s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:53.227346  147213 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1010 19:28:53.227355  147213 pod_ready.go:39] duration metric: took 4m5.554224355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.227375  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:53.227419  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:53.227484  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:53.288713  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.288749  147213 cri.go:89] found id: ""
	I1010 19:28:53.288759  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:53.288823  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.294819  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:53.294904  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:53.340169  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:53.340197  147213 cri.go:89] found id: ""
	I1010 19:28:53.340207  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:53.340271  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.345214  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:53.345292  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:53.392808  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.392838  147213 cri.go:89] found id: ""
	I1010 19:28:53.392859  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:53.392921  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.398275  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:53.398361  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:53.439567  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.439594  147213 cri.go:89] found id: ""
	I1010 19:28:53.439604  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:53.439665  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.444366  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:53.444436  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:53.522580  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:53.522597  147213 cri.go:89] found id: ""
	I1010 19:28:53.522605  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:53.522654  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.528890  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:53.528974  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:53.575933  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:53.575963  147213 cri.go:89] found id: ""
	I1010 19:28:53.575975  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:53.576035  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.581693  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:53.581763  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:53.619789  147213 cri.go:89] found id: ""
	I1010 19:28:53.619819  147213 logs.go:282] 0 containers: []
	W1010 19:28:53.619831  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:53.619839  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:53.619899  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:53.659715  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:53.659746  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:53.659752  147213 cri.go:89] found id: ""
	I1010 19:28:53.659762  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:53.659828  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.664377  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.668766  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:53.668796  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:53.685976  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:53.686007  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:53.497232  148525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:53.497251  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:53.497273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.497732  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498599  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.498627  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498971  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.499159  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.499312  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.499414  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.501044  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.501531  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501782  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.501956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.502080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.502232  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.512240  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1010 19:28:53.512809  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.513347  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.513368  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.513787  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.514001  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.515436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.515639  148525 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.515659  148525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:53.515681  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.518128  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518596  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.518628  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518909  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.519080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.519216  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.519376  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.712871  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:53.755059  148525 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766564  148525 node_ready.go:49] node "default-k8s-diff-port-361847" has status "Ready":"True"
	I1010 19:28:53.766590  148525 node_ready.go:38] duration metric: took 11.490223ms for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766603  148525 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.777458  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:53.875493  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:53.875525  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:53.911443  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.944885  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:53.944919  148525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:53.945487  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:54.011209  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.011239  148525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:54.039679  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.598172  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598226  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598584  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598608  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.598619  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598898  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:54.598931  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598939  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.643365  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.643392  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.643734  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.643760  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287018  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.341483807s)
	I1010 19:28:55.287045  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.247326452s)
	I1010 19:28:55.287089  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287094  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287112  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287440  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287479  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287506  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287524  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287570  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287589  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287598  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287607  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287818  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287831  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.287855  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287862  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287872  148525 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-361847"
	I1010 19:28:55.287880  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.289944  148525 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
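	The addon enablement above comes down to copying the manifests onto the node and invoking the node-local kubectl against them. Below is a minimal sketch (not the minikube implementation) of that apply step in Go; the sudo/KUBECONFIG form, binary path, and manifest paths are taken from the logged command, and anything else is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		// Mirrors the logged command:
		//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		//     /var/lib/minikube/binaries/v1.31.1/kubectl apply -f ... -f ...
		args := []string{
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
		}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("kubectl apply failed:", err)
		}
	}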
	I1010 19:28:53.841387  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:53.841441  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.892951  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:53.893005  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.947636  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:53.947668  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.992969  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:53.992998  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:54.520652  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:54.520703  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:28:54.588366  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:54.588418  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:54.651179  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:54.651227  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:54.712881  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:54.712925  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:54.779030  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:54.779094  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:54.821961  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:54.822002  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:54.871409  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:54.871446  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
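	The repeated "listing CRI containers" / "Gathering logs" cycles above all shell out to crictl: first listing container IDs by name, then dumping the last 400 log lines per container. A minimal sketch of that cycle (assuming crictl is on PATH and sudo is available, as the harness does over SSH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`:
	// one container ID per output line, or none if nothing matches.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := containerIDs(name)
			if err != nil {
				fmt.Println("listing", name, "failed:", err)
				continue
			}
			for _, id := range ids {
				// Mirrors `sudo crictl logs --tail 400 <id>` from the log lines above.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
			}
		}
	}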
	I1010 19:28:57.425310  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:57.442308  147213 api_server.go:72] duration metric: took 4m17.02881034s to wait for apiserver process to appear ...
	I1010 19:28:57.442343  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:57.442383  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:57.442444  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:57.481392  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.481420  147213 cri.go:89] found id: ""
	I1010 19:28:57.481430  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:57.481503  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.486191  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:57.486269  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:57.532238  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.532271  147213 cri.go:89] found id: ""
	I1010 19:28:57.532284  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:57.532357  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.538105  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:57.538188  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:57.579729  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:57.579757  147213 cri.go:89] found id: ""
	I1010 19:28:57.579767  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:57.579833  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.584494  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:57.584568  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:57.623920  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:57.623949  147213 cri.go:89] found id: ""
	I1010 19:28:57.623960  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:57.624028  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.628927  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:57.629018  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:57.669669  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.669698  147213 cri.go:89] found id: ""
	I1010 19:28:57.669707  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:57.669771  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.674449  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:57.674526  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:57.721856  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:57.721881  147213 cri.go:89] found id: ""
	I1010 19:28:57.721891  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:57.721955  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.726422  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:57.726497  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:57.764464  147213 cri.go:89] found id: ""
	I1010 19:28:57.764499  147213 logs.go:282] 0 containers: []
	W1010 19:28:57.764512  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:57.764521  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:57.764595  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:57.809758  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:57.809784  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:57.809788  147213 cri.go:89] found id: ""
	I1010 19:28:57.809797  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:57.809854  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.815576  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.820152  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:57.820181  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.869339  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:57.869383  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.918698  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:57.918739  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.960939  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:57.960985  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:58.013572  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:58.013612  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:58.053247  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:58.053277  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:58.507428  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:58.507473  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:58.552704  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:58.552742  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:58.672077  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:58.672127  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:58.690997  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:58.691049  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:58.735251  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:58.735287  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:55.291700  148525 addons.go:510] duration metric: took 1.845864985s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:55.785186  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:57.789567  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:00.284444  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:01.297627  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.297660  148525 pod_ready.go:82] duration metric: took 7.520173084s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.297676  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804654  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.804676  148525 pod_ready.go:82] duration metric: took 506.992872ms for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804690  148525 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809788  148525 pod_ready.go:93] pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.809814  148525 pod_ready.go:82] duration metric: took 5.116023ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809825  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814460  148525 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.814486  148525 pod_ready.go:82] duration metric: took 4.652085ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814501  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819719  148525 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.819741  148525 pod_ready.go:82] duration metric: took 5.231258ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819753  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082285  148525 pod_ready.go:93] pod "kube-proxy-jlvn6" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.082325  148525 pod_ready.go:82] duration metric: took 262.562954ms for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082342  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481705  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.481730  148525 pod_ready.go:82] duration metric: took 399.378957ms for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481742  148525 pod_ready.go:39] duration metric: took 8.715126416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:29:02.481779  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:29:02.481832  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:29:02.498706  148525 api_server.go:72] duration metric: took 9.052891898s to wait for apiserver process to appear ...
	I1010 19:29:02.498760  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:29:02.498795  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:29:02.503501  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:29:02.504594  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:02.504620  148525 api_server.go:131] duration metric: took 5.850548ms to wait for apiserver health ...
	I1010 19:29:02.504629  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:02.685579  148525 system_pods.go:59] 9 kube-system pods found
	I1010 19:29:02.685611  148525 system_pods.go:61] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:02.685618  148525 system_pods.go:61] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:02.685624  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:02.685630  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:02.685635  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:02.685639  148525 system_pods.go:61] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:02.685644  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:02.685653  148525 system_pods.go:61] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:02.685658  148525 system_pods.go:61] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:02.685669  148525 system_pods.go:74] duration metric: took 181.032548ms to wait for pod list to return data ...
	I1010 19:29:02.685683  148525 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:02.883256  148525 default_sa.go:45] found service account: "default"
	I1010 19:29:02.883288  148525 default_sa.go:55] duration metric: took 197.59742ms for default service account to be created ...
	I1010 19:29:02.883298  148525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:03.084706  148525 system_pods.go:86] 9 kube-system pods found
	I1010 19:29:03.084737  148525 system_pods.go:89] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:03.084742  148525 system_pods.go:89] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:03.084746  148525 system_pods.go:89] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:03.084751  148525 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:03.084755  148525 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:03.084759  148525 system_pods.go:89] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:03.084762  148525 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:03.084768  148525 system_pods.go:89] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:03.084772  148525 system_pods.go:89] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:03.084779  148525 system_pods.go:126] duration metric: took 201.476637ms to wait for k8s-apps to be running ...
	I1010 19:29:03.084787  148525 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:03.084832  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:03.100986  148525 system_svc.go:56] duration metric: took 16.183062ms WaitForService to wait for kubelet
	I1010 19:29:03.101026  148525 kubeadm.go:582] duration metric: took 9.655245557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:03.101050  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:03.282063  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:03.282095  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:03.282106  148525 node_conditions.go:105] duration metric: took 181.049888ms to run NodePressure ...
	I1010 19:29:03.282119  148525 start.go:241] waiting for startup goroutines ...
	I1010 19:29:03.282125  148525 start.go:246] waiting for cluster config update ...
	I1010 19:29:03.282135  148525 start.go:255] writing updated cluster config ...
	I1010 19:29:03.282414  148525 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:03.331838  148525 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:03.333698  148525 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-361847" cluster and "default" namespace by default
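	The healthz wait logged above (api_server.go) is an HTTPS GET against the apiserver's /healthz endpoint, retried until it returns 200. A minimal sketch, using the endpoint address from the log and skipping TLS verification for brevity; the real check would trust the cluster CA from the kubeconfig instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body) // body is "ok", as in the log
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		// Address and port taken from the log lines above.
		if err := waitForHealthz("https://192.168.50.32:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}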
	I1010 19:28:58.775358  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:58.775396  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:58.812210  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:58.812269  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:01.381750  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:29:01.386658  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:29:01.387793  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:01.387819  147213 api_server.go:131] duration metric: took 3.945468552s to wait for apiserver health ...
	I1010 19:29:01.387829  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:01.387861  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:29:01.387948  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:29:01.433312  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:01.433344  147213 cri.go:89] found id: ""
	I1010 19:29:01.433433  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:29:01.433521  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.437920  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:29:01.437983  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:29:01.476429  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.476458  147213 cri.go:89] found id: ""
	I1010 19:29:01.476470  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:29:01.476522  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.480912  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:29:01.480987  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:29:01.522141  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.522164  147213 cri.go:89] found id: ""
	I1010 19:29:01.522173  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:29:01.522238  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.526742  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:29:01.526803  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:29:01.572715  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:01.572747  147213 cri.go:89] found id: ""
	I1010 19:29:01.572759  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:29:01.572814  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.577754  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:29:01.577832  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:29:01.616077  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.616104  147213 cri.go:89] found id: ""
	I1010 19:29:01.616121  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:29:01.616185  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.620622  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:29:01.620702  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:29:01.662859  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:01.662889  147213 cri.go:89] found id: ""
	I1010 19:29:01.662903  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:29:01.662964  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.667491  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:29:01.667585  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:29:01.706191  147213 cri.go:89] found id: ""
	I1010 19:29:01.706217  147213 logs.go:282] 0 containers: []
	W1010 19:29:01.706228  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:29:01.706234  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:29:01.706299  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:29:01.753559  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:01.753581  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:01.753584  147213 cri.go:89] found id: ""
	I1010 19:29:01.753591  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:29:01.753645  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.758179  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.762336  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:29:01.762358  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:29:01.867667  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:29:01.867698  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.911722  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:29:01.911756  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.955152  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:29:01.955189  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.995010  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:29:01.995041  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:02.047505  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:29:02.047546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:02.085080  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:29:02.085110  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:02.128482  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:29:02.128527  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:02.194867  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:29:02.194904  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:29:02.211881  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:29:02.211911  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:02.262969  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:29:02.263013  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:02.302921  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:29:02.302956  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:29:02.671102  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:29:02.671169  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:29:05.241477  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:29:05.241508  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.241513  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.241517  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.241521  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.241525  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.241528  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.241534  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.241540  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.241549  147213 system_pods.go:74] duration metric: took 3.853712488s to wait for pod list to return data ...
	I1010 19:29:05.241556  147213 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:05.244686  147213 default_sa.go:45] found service account: "default"
	I1010 19:29:05.244721  147213 default_sa.go:55] duration metric: took 3.158069ms for default service account to be created ...
	I1010 19:29:05.244733  147213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:05.249372  147213 system_pods.go:86] 8 kube-system pods found
	I1010 19:29:05.249398  147213 system_pods.go:89] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.249404  147213 system_pods.go:89] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.249408  147213 system_pods.go:89] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.249413  147213 system_pods.go:89] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.249418  147213 system_pods.go:89] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.249425  147213 system_pods.go:89] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.249433  147213 system_pods.go:89] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.249442  147213 system_pods.go:89] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.249455  147213 system_pods.go:126] duration metric: took 4.715381ms to wait for k8s-apps to be running ...
	I1010 19:29:05.249467  147213 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:05.249519  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:05.265180  147213 system_svc.go:56] duration metric: took 15.703413ms WaitForService to wait for kubelet
	I1010 19:29:05.265216  147213 kubeadm.go:582] duration metric: took 4m24.851723603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:05.265237  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:05.268775  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:05.268807  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:05.268821  147213 node_conditions.go:105] duration metric: took 3.575195ms to run NodePressure ...
	I1010 19:29:05.268834  147213 start.go:241] waiting for startup goroutines ...
	I1010 19:29:05.268840  147213 start.go:246] waiting for cluster config update ...
	I1010 19:29:05.268869  147213 start.go:255] writing updated cluster config ...
	I1010 19:29:05.269148  147213 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:05.319999  147213 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:05.322189  147213 out.go:177] * Done! kubectl is now configured to use "no-preload-320324" cluster and "default" namespace by default
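	The final readiness checks for this cluster (system_svc.go and api_server.go above) verify that the kubelet systemd unit is active and that a kube-apiserver process exists. A minimal sketch that mirrors those two shell commands; the exact command lines are copied from the log, everything else is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors `sudo systemctl is-active --quiet service kubelet`:
		// exit status 0 means the unit is active.
		if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run(); err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is active")

		// Mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`:
		// prints the newest matching PID, or exits non-zero if none is found.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("no kube-apiserver process found:", err)
			return
		}
		fmt.Printf("kube-apiserver PID: %s", out)
	}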
	I1010 19:29:41.275119  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:29:41.275272  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:29:41.276822  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:29:41.276919  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:29:41.277017  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:29:41.277142  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:29:41.277254  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:29:41.277357  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:29:41.279069  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:29:41.279160  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:29:41.279217  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:29:41.279306  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:29:41.279381  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:29:41.279484  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:29:41.279576  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:29:41.279674  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:29:41.279779  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:29:41.279906  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:29:41.279971  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:29:41.280005  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:29:41.280052  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:29:41.280095  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:29:41.280144  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:29:41.280219  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:29:41.280317  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:29:41.280474  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:29:41.280583  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:29:41.280648  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:29:41.280736  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:29:41.282180  148123 out.go:235]   - Booting up control plane ...
	I1010 19:29:41.282266  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:29:41.282336  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:29:41.282414  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:29:41.282538  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:29:41.282748  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:29:41.282822  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:29:41.282896  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283061  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283123  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283287  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283348  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283519  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283623  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283809  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283900  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.284115  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.284135  148123 kubeadm.go:310] 
	I1010 19:29:41.284201  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:29:41.284243  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:29:41.284257  148123 kubeadm.go:310] 
	I1010 19:29:41.284307  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:29:41.284346  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:29:41.284481  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:29:41.284505  148123 kubeadm.go:310] 
	I1010 19:29:41.284655  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:29:41.284714  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:29:41.284752  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:29:41.284758  148123 kubeadm.go:310] 
	I1010 19:29:41.284913  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:29:41.285038  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:29:41.285055  148123 kubeadm.go:310] 
	I1010 19:29:41.285169  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:29:41.285307  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:29:41.285475  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:29:41.285587  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:29:41.285644  148123 kubeadm.go:310] 
	W1010 19:29:41.285747  148123 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
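The [kubelet-check] lines above show every probe of the kubelet's health endpoint on localhost:10248 being refused, which is why kubeadm times out waiting for the control plane. The checks can be reproduced by hand on the node with the commands the output itself suggests (nothing beyond what the log quotes is implied):

	curl -sSL http://localhost:10248/healthz        # the probe kubelet-check performs
	systemctl status kubelet                        # is the kubelet service running at all?
	journalctl -xeu kubelet                         # why it failed to start or stay healthy
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause   # control-plane containers CRI-O started, if any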
	
	I1010 19:29:41.285792  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:29:46.681684  148123 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.395862838s)
	I1010 19:29:46.681797  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:46.696927  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:29:46.708250  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:29:46.708280  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:29:46.708339  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:29:46.719748  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:29:46.719828  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:29:46.730892  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:29:46.742329  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:29:46.742401  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:29:46.752919  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.762860  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:29:46.762932  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.772513  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:29:46.781672  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:29:46.781746  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
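Each grep above exits with status 2 because kubeadm reset already removed the kubeconfig files, so minikube simply deletes whatever is left before retrying the init. A condensed sketch of that stale-config cleanup, assuming the same four files and control-plane endpoint shown in the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep only configs that point at the expected control-plane endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done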
	I1010 19:29:46.791250  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:29:47.018706  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:31:43.351851  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:31:43.351992  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:31:43.353664  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:31:43.353752  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:31:43.353861  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:31:43.353998  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:31:43.354130  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:31:43.354235  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:31:43.356450  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:31:43.356557  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:31:43.356652  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:31:43.356783  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:31:43.356902  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:31:43.357002  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:31:43.357074  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:31:43.357181  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:31:43.357247  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:31:43.357325  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:31:43.357408  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:31:43.357459  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:31:43.357529  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:31:43.357604  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:31:43.357676  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:31:43.357751  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:31:43.357830  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:31:43.357979  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:31:43.358092  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:31:43.358158  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:31:43.358263  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:31:43.360216  148123 out.go:235]   - Booting up control plane ...
	I1010 19:31:43.360332  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:31:43.360463  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:31:43.360551  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:31:43.360673  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:31:43.360907  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:31:43.360976  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:31:43.361058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361309  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361410  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361648  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361742  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361960  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362308  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362399  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362640  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362652  148123 kubeadm.go:310] 
	I1010 19:31:43.362708  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:31:43.362758  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:31:43.362768  148123 kubeadm.go:310] 
	I1010 19:31:43.362809  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:31:43.362858  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:31:43.362981  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:31:43.362993  148123 kubeadm.go:310] 
	I1010 19:31:43.363119  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:31:43.363164  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:31:43.363204  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:31:43.363219  148123 kubeadm.go:310] 
	I1010 19:31:43.363344  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:31:43.363454  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:31:43.363465  148123 kubeadm.go:310] 
	I1010 19:31:43.363591  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:31:43.363708  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:31:43.363803  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:31:43.363892  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:31:43.363978  148123 kubeadm.go:394] duration metric: took 8m3.085634474s to StartCluster
	I1010 19:31:43.364052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:31:43.364159  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:31:43.364268  148123 kubeadm.go:310] 
	I1010 19:31:43.413041  148123 cri.go:89] found id: ""
	I1010 19:31:43.413075  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.413086  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:31:43.413092  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:31:43.413206  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:31:43.456454  148123 cri.go:89] found id: ""
	I1010 19:31:43.456487  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.456499  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:31:43.456507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:31:43.456567  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:31:43.498582  148123 cri.go:89] found id: ""
	I1010 19:31:43.498614  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.498624  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:31:43.498633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:31:43.498694  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:31:43.540557  148123 cri.go:89] found id: ""
	I1010 19:31:43.540589  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.540598  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:31:43.540605  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:31:43.540661  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:31:43.576743  148123 cri.go:89] found id: ""
	I1010 19:31:43.576776  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.576787  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:31:43.576796  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:31:43.576887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:31:43.614620  148123 cri.go:89] found id: ""
	I1010 19:31:43.614652  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.614662  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:31:43.614668  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:31:43.614730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:31:43.652008  148123 cri.go:89] found id: ""
	I1010 19:31:43.652061  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.652075  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:31:43.652084  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:31:43.652153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:31:43.689990  148123 cri.go:89] found id: ""
	I1010 19:31:43.690019  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.690028  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:31:43.690043  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:31:43.690056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:31:43.745634  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:31:43.745673  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:31:43.760064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:31:43.760095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:31:43.840168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:31:43.840197  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:31:43.840214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:31:43.943265  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:31:43.943303  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
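With no containers found for any control-plane component, minikube falls back to gathering raw diagnostics in a fixed order: the kubelet journal, dmesg, kubectl describe nodes, the CRI-O journal, and container status. The same data can be collected manually on the node; the commands below are the ones the log runs, lightly abbreviated (binary path and kubectl version are taken from the output above):

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo crictl ps -a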
	W1010 19:31:43.987758  148123 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:31:43.987816  148123 out.go:270] * 
	W1010 19:31:43.987891  148123 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.987906  148123 out.go:270] * 
	W1010 19:31:43.988703  148123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:31:43.991928  148123 out.go:201] 
	W1010 19:31:43.993494  148123 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.993556  148123 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:31:43.993585  148123 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
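The suggestion above points at a cgroup-driver mismatch between CRI-O and the kubelet. A hedged example of acting on it; the profile name is a placeholder for whichever cluster failed, not something the log specifies:

	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	# then confirm the kubelet actually stays up:
	minikube ssh -p <profile> "sudo systemctl status kubelet"

Whether systemd is the right driver depends on how CRI-O is configured on the node; 'journalctl -xeu kubelet' from the troubleshooting block earlier in the log is the quickest way to confirm the actual mismatch.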
	I1010 19:31:43.995273  148123 out.go:201] 
	
	
	==> CRI-O <==
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.790503732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589621790477015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a2b0fd3-da83-44f2-a63f-d856022b06bf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.791063693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47f92945-72b5-4819-a66e-f8ecfa5dfc82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.791192562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47f92945-72b5-4819-a66e-f8ecfa5dfc82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.791572925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3af3f927e6e218cdbe6528482fbad91146bcf8b8e2f3ecc6f4dd22367e3353f6,PodSandboxId:e01154709f127b006abdad36e3e8ded86fc4b6450036f64af1396f428d2d8f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588535735861530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea1a4ade-9648-401f-a0ad-633ab3c1196b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34d62ea901a006bee1b94b8a0963dce08a38a1fa76a03c69046a03b594563c9,PodSandboxId:f82b561c276715641ce0cbe818bb5ed36fa05d45fee515656f171bc5a4450fd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588535033615341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jlvn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6336f682-0362-4855-b848-3540052aec19,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fafdf63631a8ff669c19a4671a7c2c85fa4dad5700f0ca207bb21013ef99b06,PodSandboxId:77913b146b0c128f6f16ec19db1e1cdad56d2dc8d1143598ae43bdc9cbcc5536,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534445030987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fgxh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4faa977-3205-4395-bda3-8fe24fdcf6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b8f844b7b0586105ff3dba820b339ba8a3f6bbd6277900172e961d7006a1c2,PodSandboxId:c1ea37fae8f88a9283add7ed11f2cc1c0f92b5e5b297926d04325e4132961de1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534125856622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dh9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff14d755-810a-497a-b1fc-
7fe231748af3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbfa3f7b306bd53ceda1d13c69729c39dc0eb7db018adb04d47e120cbdb300f7,PodSandboxId:ad4c2c7d8d35d6ad8b2f97e2a1b6527975ed0d14435ea181f3b3e50a75e8ccd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588522855220602,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb250ff6dad742b9f14cc7b757329d85,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7a70a0d1c6b99da20dd7aa2a10a91457890905859cc04c211e03b36b5e34be,PodSandboxId:5efab2a6937fd8e2fd2d74e6c07f859c00f1369d27e8fe02a033cbdebd922639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588522801390605,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4eaa86354b36640568a0448bbc6bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc897586e115d16a892ff088925d6b09643f59333e746617ae836506a76e3939,PodSandboxId:1ff7755d161d96dc0a648e1b1cd4cb0e147cbc9943c1021b6f1e4857bbe6f06f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588522836342605,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:decf33fb776b612209e9001fe9e0154fc6db1046acee4d9d83f96e7bf6e906ff,PodSandboxId:df22536f2cd09d30a1a1104b472ae89f67f8f97332d2bcc7067831641df363da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588522817430732,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b57c646474679053469c7268c1c49d62,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57335da36e4a25689e6f890eff8bfda0f17a5e5f055b71ba73288383a1b0b07c,PodSandboxId:8c451c4f67d2e5ef483b0964581573cdb6a25ceeddeadde2b5e4166321e63f6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588241406370879,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47f92945-72b5-4819-a66e-f8ecfa5dfc82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.831240400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c2708e2-3491-46c7-9bdd-7bdf30dc3e8a name=/runtime.v1.RuntimeService/Version
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.831311344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c2708e2-3491-46c7-9bdd-7bdf30dc3e8a name=/runtime.v1.RuntimeService/Version
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.832958905Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fd507ff5-4e08-4ab1-b311-a053c3287726 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.833479939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589621833455529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fd507ff5-4e08-4ab1-b311-a053c3287726 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.833952422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ccf8ab6-03bc-4d4d-a525-77ee0229540f name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.834039234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ccf8ab6-03bc-4d4d-a525-77ee0229540f name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.834342345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3af3f927e6e218cdbe6528482fbad91146bcf8b8e2f3ecc6f4dd22367e3353f6,PodSandboxId:e01154709f127b006abdad36e3e8ded86fc4b6450036f64af1396f428d2d8f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588535735861530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea1a4ade-9648-401f-a0ad-633ab3c1196b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34d62ea901a006bee1b94b8a0963dce08a38a1fa76a03c69046a03b594563c9,PodSandboxId:f82b561c276715641ce0cbe818bb5ed36fa05d45fee515656f171bc5a4450fd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588535033615341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jlvn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6336f682-0362-4855-b848-3540052aec19,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fafdf63631a8ff669c19a4671a7c2c85fa4dad5700f0ca207bb21013ef99b06,PodSandboxId:77913b146b0c128f6f16ec19db1e1cdad56d2dc8d1143598ae43bdc9cbcc5536,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534445030987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fgxh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4faa977-3205-4395-bda3-8fe24fdcf6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b8f844b7b0586105ff3dba820b339ba8a3f6bbd6277900172e961d7006a1c2,PodSandboxId:c1ea37fae8f88a9283add7ed11f2cc1c0f92b5e5b297926d04325e4132961de1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534125856622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dh9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff14d755-810a-497a-b1fc-
7fe231748af3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbfa3f7b306bd53ceda1d13c69729c39dc0eb7db018adb04d47e120cbdb300f7,PodSandboxId:ad4c2c7d8d35d6ad8b2f97e2a1b6527975ed0d14435ea181f3b3e50a75e8ccd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588522855220602,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb250ff6dad742b9f14cc7b757329d85,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7a70a0d1c6b99da20dd7aa2a10a91457890905859cc04c211e03b36b5e34be,PodSandboxId:5efab2a6937fd8e2fd2d74e6c07f859c00f1369d27e8fe02a033cbdebd922639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588522801390605,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4eaa86354b36640568a0448bbc6bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc897586e115d16a892ff088925d6b09643f59333e746617ae836506a76e3939,PodSandboxId:1ff7755d161d96dc0a648e1b1cd4cb0e147cbc9943c1021b6f1e4857bbe6f06f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588522836342605,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:decf33fb776b612209e9001fe9e0154fc6db1046acee4d9d83f96e7bf6e906ff,PodSandboxId:df22536f2cd09d30a1a1104b472ae89f67f8f97332d2bcc7067831641df363da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588522817430732,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b57c646474679053469c7268c1c49d62,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57335da36e4a25689e6f890eff8bfda0f17a5e5f055b71ba73288383a1b0b07c,PodSandboxId:8c451c4f67d2e5ef483b0964581573cdb6a25ceeddeadde2b5e4166321e63f6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588241406370879,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ccf8ab6-03bc-4d4d-a525-77ee0229540f name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.877342957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56b6b766-245c-4e2f-95a3-9dfd6d6459c6 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.877418881Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56b6b766-245c-4e2f-95a3-9dfd6d6459c6 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.879255112Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70ff82cc-a3bc-4947-9e0d-4eab9b364b76 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.879624520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589621879600908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70ff82cc-a3bc-4947-9e0d-4eab9b364b76 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.880299827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3c09781-a43f-4cd2-b8c8-cf079cf125bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.880369625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3c09781-a43f-4cd2-b8c8-cf079cf125bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.880684526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3af3f927e6e218cdbe6528482fbad91146bcf8b8e2f3ecc6f4dd22367e3353f6,PodSandboxId:e01154709f127b006abdad36e3e8ded86fc4b6450036f64af1396f428d2d8f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588535735861530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea1a4ade-9648-401f-a0ad-633ab3c1196b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34d62ea901a006bee1b94b8a0963dce08a38a1fa76a03c69046a03b594563c9,PodSandboxId:f82b561c276715641ce0cbe818bb5ed36fa05d45fee515656f171bc5a4450fd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588535033615341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jlvn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6336f682-0362-4855-b848-3540052aec19,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fafdf63631a8ff669c19a4671a7c2c85fa4dad5700f0ca207bb21013ef99b06,PodSandboxId:77913b146b0c128f6f16ec19db1e1cdad56d2dc8d1143598ae43bdc9cbcc5536,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534445030987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fgxh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4faa977-3205-4395-bda3-8fe24fdcf6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b8f844b7b0586105ff3dba820b339ba8a3f6bbd6277900172e961d7006a1c2,PodSandboxId:c1ea37fae8f88a9283add7ed11f2cc1c0f92b5e5b297926d04325e4132961de1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534125856622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dh9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff14d755-810a-497a-b1fc-
7fe231748af3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbfa3f7b306bd53ceda1d13c69729c39dc0eb7db018adb04d47e120cbdb300f7,PodSandboxId:ad4c2c7d8d35d6ad8b2f97e2a1b6527975ed0d14435ea181f3b3e50a75e8ccd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588522855220602,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb250ff6dad742b9f14cc7b757329d85,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7a70a0d1c6b99da20dd7aa2a10a91457890905859cc04c211e03b36b5e34be,PodSandboxId:5efab2a6937fd8e2fd2d74e6c07f859c00f1369d27e8fe02a033cbdebd922639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588522801390605,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4eaa86354b36640568a0448bbc6bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc897586e115d16a892ff088925d6b09643f59333e746617ae836506a76e3939,PodSandboxId:1ff7755d161d96dc0a648e1b1cd4cb0e147cbc9943c1021b6f1e4857bbe6f06f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588522836342605,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:decf33fb776b612209e9001fe9e0154fc6db1046acee4d9d83f96e7bf6e906ff,PodSandboxId:df22536f2cd09d30a1a1104b472ae89f67f8f97332d2bcc7067831641df363da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588522817430732,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b57c646474679053469c7268c1c49d62,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57335da36e4a25689e6f890eff8bfda0f17a5e5f055b71ba73288383a1b0b07c,PodSandboxId:8c451c4f67d2e5ef483b0964581573cdb6a25ceeddeadde2b5e4166321e63f6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588241406370879,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3c09781-a43f-4cd2-b8c8-cf079cf125bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.915509080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e71ea1b8-ecd6-49f5-bad2-337077c4c8f9 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.915581808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e71ea1b8-ecd6-49f5-bad2-337077c4c8f9 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.916734657Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e1a4dd1-8d39-4672-a768-e4561e1835fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.917114474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589621917092632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e1a4dd1-8d39-4672-a768-e4561e1835fb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.917742305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66ea59b1-f2cd-4c13-bc2a-ac6d3014a7d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.917793843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66ea59b1-f2cd-4c13-bc2a-ac6d3014a7d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:47:01 default-k8s-diff-port-361847 crio[714]: time="2024-10-10 19:47:01.918000178Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3af3f927e6e218cdbe6528482fbad91146bcf8b8e2f3ecc6f4dd22367e3353f6,PodSandboxId:e01154709f127b006abdad36e3e8ded86fc4b6450036f64af1396f428d2d8f29,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588535735861530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea1a4ade-9648-401f-a0ad-633ab3c1196b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c34d62ea901a006bee1b94b8a0963dce08a38a1fa76a03c69046a03b594563c9,PodSandboxId:f82b561c276715641ce0cbe818bb5ed36fa05d45fee515656f171bc5a4450fd5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588535033615341,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jlvn6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6336f682-0362-4855-b848-3540052aec19,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fafdf63631a8ff669c19a4671a7c2c85fa4dad5700f0ca207bb21013ef99b06,PodSandboxId:77913b146b0c128f6f16ec19db1e1cdad56d2dc8d1143598ae43bdc9cbcc5536,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534445030987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fgxh7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4faa977-3205-4395-bda3-8fe24fdcf6cc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b8f844b7b0586105ff3dba820b339ba8a3f6bbd6277900172e961d7006a1c2,PodSandboxId:c1ea37fae8f88a9283add7ed11f2cc1c0f92b5e5b297926d04325e4132961de1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588534125856622,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dh9th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff14d755-810a-497a-b1fc-
7fe231748af3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbfa3f7b306bd53ceda1d13c69729c39dc0eb7db018adb04d47e120cbdb300f7,PodSandboxId:ad4c2c7d8d35d6ad8b2f97e2a1b6527975ed0d14435ea181f3b3e50a75e8ccd6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588522855220602,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb250ff6dad742b9f14cc7b757329d85,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b7a70a0d1c6b99da20dd7aa2a10a91457890905859cc04c211e03b36b5e34be,PodSandboxId:5efab2a6937fd8e2fd2d74e6c07f859c00f1369d27e8fe02a033cbdebd922639,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588522801390605,Labels:map[string]str
ing{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f4eaa86354b36640568a0448bbc6bb4,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc897586e115d16a892ff088925d6b09643f59333e746617ae836506a76e3939,PodSandboxId:1ff7755d161d96dc0a648e1b1cd4cb0e147cbc9943c1021b6f1e4857bbe6f06f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588522836342605,Label
s:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:decf33fb776b612209e9001fe9e0154fc6db1046acee4d9d83f96e7bf6e906ff,PodSandboxId:df22536f2cd09d30a1a1104b472ae89f67f8f97332d2bcc7067831641df363da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588522817430732,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b57c646474679053469c7268c1c49d62,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57335da36e4a25689e6f890eff8bfda0f17a5e5f055b71ba73288383a1b0b07c,PodSandboxId:8c451c4f67d2e5ef483b0964581573cdb6a25ceeddeadde2b5e4166321e63f6f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728588241406370879,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-361847,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1732c94af48bf557e8cc0c0f19485d,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66ea59b1-f2cd-4c13-bc2a-ac6d3014a7d8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3af3f927e6e21       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   e01154709f127       storage-provisioner
	c34d62ea901a0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   18 minutes ago      Running             kube-proxy                0                   f82b561c27671       kube-proxy-jlvn6
	1fafdf63631a8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   18 minutes ago      Running             coredns                   0                   77913b146b0c1       coredns-7c65d6cfc9-fgxh7
	c8b8f844b7b05       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   18 minutes ago      Running             coredns                   0                   c1ea37fae8f88       coredns-7c65d6cfc9-dh9th
	fbfa3f7b306bd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   18 minutes ago      Running             etcd                      2                   ad4c2c7d8d35d       etcd-default-k8s-diff-port-361847
	dc897586e115d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   18 minutes ago      Running             kube-apiserver            2                   1ff7755d161d9       kube-apiserver-default-k8s-diff-port-361847
	decf33fb776b6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   18 minutes ago      Running             kube-scheduler            2                   df22536f2cd09       kube-scheduler-default-k8s-diff-port-361847
	0b7a70a0d1c6b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   18 minutes ago      Running             kube-controller-manager   2                   5efab2a6937fd       kube-controller-manager-default-k8s-diff-port-361847
	57335da36e4a2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   23 minutes ago      Exited              kube-apiserver            1                   8c451c4f67d2e       kube-apiserver-default-k8s-diff-port-361847
	
	
	==> coredns [1fafdf63631a8ff669c19a4671a7c2c85fa4dad5700f0ca207bb21013ef99b06] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c8b8f844b7b0586105ff3dba820b339ba8a3f6bbd6277900172e961d7006a1c2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-361847
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-361847
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=default-k8s-diff-port-361847
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T19_28_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 19:28:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-361847
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 19:47:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 19:44:16 +0000   Thu, 10 Oct 2024 19:28:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 19:44:16 +0000   Thu, 10 Oct 2024 19:28:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 19:44:16 +0000   Thu, 10 Oct 2024 19:28:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 19:44:16 +0000   Thu, 10 Oct 2024 19:28:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.32
	  Hostname:    default-k8s-diff-port-361847
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4eae24e916514d64996e008ddd3e63f0
	  System UUID:                4eae24e9-1651-4d64-996e-008ddd3e63f0
	  Boot ID:                    9b3a015f-d090-461f-84c7-df645892ed0b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dh9th                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-7c65d6cfc9-fgxh7                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-default-k8s-diff-port-361847                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-default-k8s-diff-port-361847             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-361847    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-jlvn6                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-default-k8s-diff-port-361847             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-6867b74b74-fdf7p                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node default-k8s-diff-port-361847 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node default-k8s-diff-port-361847 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node default-k8s-diff-port-361847 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m   node-controller  Node default-k8s-diff-port-361847 event: Registered Node default-k8s-diff-port-361847 in Controller
	
	
	==> dmesg <==
	[  +0.053424] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041721] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.187571] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.537652] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.606078] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.103148] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.058638] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053693] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.201763] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +0.121613] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.315074] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[  +4.352652] systemd-fstab-generator[793]: Ignoring "noauto" option for root device
	[  +0.062955] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.926151] systemd-fstab-generator[915]: Ignoring "noauto" option for root device
	[Oct10 19:24] kauditd_printk_skb: 97 callbacks suppressed
	[  +8.480079] kauditd_printk_skb: 85 callbacks suppressed
	[Oct10 19:28] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.203806] systemd-fstab-generator[2587]: Ignoring "noauto" option for root device
	[  +4.677214] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.884942] systemd-fstab-generator[2909]: Ignoring "noauto" option for root device
	[  +5.439093] systemd-fstab-generator[3026]: Ignoring "noauto" option for root device
	[  +0.100917] kauditd_printk_skb: 14 callbacks suppressed
	[Oct10 19:29] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [fbfa3f7b306bd53ceda1d13c69729c39dc0eb7db018adb04d47e120cbdb300f7] <==
	{"level":"info","ts":"2024-10-10T19:28:43.402286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-10T19:28:43.402302Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgPreVoteResp from fbd4dd8524dacdec at term 1"}
	{"level":"info","ts":"2024-10-10T19:28:43.402313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became candidate at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:43.402319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec received MsgVoteResp from fbd4dd8524dacdec at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:43.402327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbd4dd8524dacdec became leader at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:43.402334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbd4dd8524dacdec elected leader fbd4dd8524dacdec at term 2"}
	{"level":"info","ts":"2024-10-10T19:28:43.406368Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"fbd4dd8524dacdec","local-member-attributes":"{Name:default-k8s-diff-port-361847 ClientURLs:[https://192.168.50.32:2379]}","request-path":"/0/members/fbd4dd8524dacdec/attributes","cluster-id":"2484c988a436b7d1","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-10T19:28:43.406490Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:28:43.407219Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:43.417221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-10T19:28:43.417365Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-10T19:28:43.413261Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:28:43.414813Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:28:43.422258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.32:2379"}
	{"level":"info","ts":"2024-10-10T19:28:43.422890Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:28:43.425754Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-10T19:28:43.457411Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2484c988a436b7d1","local-member-id":"fbd4dd8524dacdec","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:43.459433Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:28:43.459510Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:38:43.833112Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":687}
	{"level":"info","ts":"2024-10-10T19:38:43.843120Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":687,"took":"9.439252ms","hash":3290937971,"current-db-size-bytes":2138112,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2138112,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-10-10T19:38:43.843263Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3290937971,"revision":687,"compact-revision":-1}
	{"level":"info","ts":"2024-10-10T19:43:43.841998Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":930}
	{"level":"info","ts":"2024-10-10T19:43:43.847314Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":930,"took":"4.877781ms","hash":83851247,"current-db-size-bytes":2138112,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-10-10T19:43:43.847376Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":83851247,"revision":930,"compact-revision":687}
	
	
	==> kernel <==
	 19:47:02 up 23 min,  0 users,  load average: 0.04, 0.05, 0.06
	Linux default-k8s-diff-port-361847 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [57335da36e4a25689e6f890eff8bfda0f17a5e5f055b71ba73288383a1b0b07c] <==
	W1010 19:28:39.418934       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.425468       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.493673       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.497033       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.515769       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.537531       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.543923       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.581953       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.606927       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.616860       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.637843       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.667418       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.675005       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.694737       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.703408       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.732722       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.805596       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.820129       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.843447       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:39.937101       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:40.014605       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:40.061465       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:40.089411       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:40.089411       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1010 19:28:40.123574       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dc897586e115d16a892ff088925d6b09643f59333e746617ae836506a76e3939] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1010 19:43:46.435834       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:43:46.435849       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1010 19:43:46.436952       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:43:46.437026       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:44:46.437533       1 handler_proxy.go:99] no RequestInfo found in the context
	W1010 19:44:46.437593       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:44:46.437630       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1010 19:44:46.437630       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1010 19:44:46.439203       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:44:46.439291       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:46:46.440017       1 handler_proxy.go:99] no RequestInfo found in the context
	W1010 19:46:46.440044       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:46:46.440222       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1010 19:46:46.440281       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:46:46.441593       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:46:46.441695       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0b7a70a0d1c6b99da20dd7aa2a10a91457890905859cc04c211e03b36b5e34be] <==
	E1010 19:41:52.615774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:41:53.060096       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:42:22.621912       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:42:23.069526       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:42:52.629101       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:42:53.077369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:43:22.636756       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:43:23.085338       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:43:52.644530       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:43:53.092843       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:44:16.064211       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-361847"
	E1010 19:44:22.652671       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:44:23.100624       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:44:52.663776       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:44:53.110114       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:45:18.445924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="313.03µs"
	E1010 19:45:22.673452       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:45:23.117512       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:45:33.424195       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="110.898µs"
	E1010 19:45:52.680810       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:45:53.125116       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:46:22.688681       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:46:23.134089       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:46:52.696367       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:46:53.142478       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c34d62ea901a006bee1b94b8a0963dce08a38a1fa76a03c69046a03b594563c9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 19:28:55.433214       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 19:28:55.452659       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.32"]
	E1010 19:28:55.452966       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 19:28:55.604687       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 19:28:55.604789       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 19:28:55.604856       1 server_linux.go:169] "Using iptables Proxier"
	I1010 19:28:55.608887       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 19:28:55.610848       1 server.go:483] "Version info" version="v1.31.1"
	I1010 19:28:55.610968       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 19:28:55.614278       1 config.go:199] "Starting service config controller"
	I1010 19:28:55.615809       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 19:28:55.615871       1 config.go:105] "Starting endpoint slice config controller"
	I1010 19:28:55.615881       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 19:28:55.617600       1 config.go:328] "Starting node config controller"
	I1010 19:28:55.617610       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 19:28:55.716024       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1010 19:28:55.716108       1 shared_informer.go:320] Caches are synced for service config
	I1010 19:28:55.717671       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [decf33fb776b612209e9001fe9e0154fc6db1046acee4d9d83f96e7bf6e906ff] <==
	W1010 19:28:45.477362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1010 19:28:45.478206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.496323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1010 19:28:46.497206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.579399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1010 19:28:46.579529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.643093       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1010 19:28:46.643250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.649461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1010 19:28:46.649562       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.661549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1010 19:28:46.661652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.688958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1010 19:28:46.689128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.689071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1010 19:28:46.689359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.802868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1010 19:28:46.803666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.819655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1010 19:28:46.820204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.846338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 19:28:46.846658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1010 19:28:46.926814       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1010 19:28:46.928281       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1010 19:28:48.966763       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 10 19:45:48 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:45:48.681748    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589548681203639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:45:58 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:45:58.684209    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589558683473554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:45:58 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:45:58.684513    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589558683473554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:01 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:01.408014    2916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fdf7p" podUID="6f8ca204-13fe-4adb-9c09-33ec6821ff2d"
	Oct 10 19:46:08 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:08.686655    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589568686038687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:08 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:08.687066    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589568686038687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:12 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:12.408931    2916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fdf7p" podUID="6f8ca204-13fe-4adb-9c09-33ec6821ff2d"
	Oct 10 19:46:18 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:18.688679    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589578688249871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:18 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:18.688996    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589578688249871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:26 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:26.408279    2916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fdf7p" podUID="6f8ca204-13fe-4adb-9c09-33ec6821ff2d"
	Oct 10 19:46:28 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:28.692448    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589588691944345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:28 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:28.692718    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589588691944345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:37 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:37.408390    2916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fdf7p" podUID="6f8ca204-13fe-4adb-9c09-33ec6821ff2d"
	Oct 10 19:46:38 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:38.695602    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589598694855026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:38 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:38.696025    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589598694855026,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:48 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:48.440411    2916 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 19:46:48 default-k8s-diff-port-361847 kubelet[2916]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 19:46:48 default-k8s-diff-port-361847 kubelet[2916]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 19:46:48 default-k8s-diff-port-361847 kubelet[2916]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 19:46:48 default-k8s-diff-port-361847 kubelet[2916]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 19:46:48 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:48.698406    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589608698033343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:48 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:48.698433    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589608698033343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:51 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:51.408556    2916 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fdf7p" podUID="6f8ca204-13fe-4adb-9c09-33ec6821ff2d"
	Oct 10 19:46:58 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:58.702886    2916 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589618701759405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:46:58 default-k8s-diff-port-361847 kubelet[2916]: E1010 19:46:58.702996    2916 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589618701759405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3af3f927e6e218cdbe6528482fbad91146bcf8b8e2f3ecc6f4dd22367e3353f6] <==
	I1010 19:28:55.835064       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 19:28:55.844850       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 19:28:55.845024       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 19:28:55.858573       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 19:28:55.858899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-361847_ecf32781-3b84-4083-8316-13968b37b0f6!
	I1010 19:28:55.859691       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea091e49-901f-468f-9bcc-d20776ed10cf", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-361847_ecf32781-3b84-4083-8316-13968b37b0f6 became leader
	I1010 19:28:55.959644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-361847_ecf32781-3b84-4083-8316-13968b37b0f6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-361847 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-fdf7p
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-361847 describe pod metrics-server-6867b74b74-fdf7p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-361847 describe pod metrics-server-6867b74b74-fdf7p: exit status 1 (65.476885ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-fdf7p" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-361847 describe pod metrics-server-6867b74b74-fdf7p: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (536.07s)
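The only non-running pod found by the post-mortem above is metrics-server-6867b74b74-fdf7p, which the kubelet log shows stuck in ImagePullBackOff because the test deliberately points the MetricsServer image at the unreachable registry fake.domain (see the Audit table further down). A rough sketch of how that state could be inspected by hand, assuming the default-k8s-diff-port-361847 profile from this run still exists; the commands are illustrative only and not part of the test harness:

	# Is the metrics.k8s.io APIService ever marked Available?
	kubectl --context default-k8s-diff-port-361847 get apiservice v1beta1.metrics.k8s.io
	# Which image was the metrics-server deployment told to pull (fake.domain/... per the kubelet log)?
	kubectl --context default-k8s-diff-port-361847 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'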

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (351.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-320324 -n no-preload-320324
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-10 19:43:57.791765894 +0000 UTC m=+6385.476701080
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-320324 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-320324 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.07µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-320324 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
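The check at start_stop_delete_test.go:297 amounts to verifying that the dashboard-metrics-scraper deployment in the kubernetes-dashboard namespace carries the registry.k8s.io/echoserver:1.4 image override passed to "addons enable dashboard"; here the describe call hit the context deadline, so no deployment info was printed. A rough manual equivalent of that check might look like the sketch below, assuming the no-preload-320324 context is reachable (illustrative only):

	# Pods the test waited 9m0s for (label taken from the wait above)
	kubectl --context no-preload-320324 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Image set on the scraper deployment; the test expects it to contain registry.k8s.io/echoserver:1.4
	kubectl --context no-preload-320324 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'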
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320324 -n no-preload-320324
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-320324 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-320324 logs -n 25: (2.073950565s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-029826             | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-029826                  | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-029826 --memory=2200 --alsologtostderr   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-541370            | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-029826 image list                           | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:17 UTC | 10 Oct 24 19:18 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320324                  | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-947203        | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-361847  | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-541370                 | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-947203             | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-361847       | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC | 10 Oct 24 19:29 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:43 UTC | 10 Oct 24 19:43 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 19:21:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 19:21:13.943219  148525 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:21:13.943336  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943343  148525 out.go:358] Setting ErrFile to fd 2...
	I1010 19:21:13.943347  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943560  148525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:21:13.944109  148525 out.go:352] Setting JSON to false
	I1010 19:21:13.945219  148525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11020,"bootTime":1728577054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:21:13.945321  148525 start.go:139] virtualization: kvm guest
	I1010 19:21:13.947915  148525 out.go:177] * [default-k8s-diff-port-361847] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:21:13.950021  148525 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:21:13.950037  148525 notify.go:220] Checking for updates...
	I1010 19:21:13.952994  148525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:21:13.954661  148525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:21:13.956438  148525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:21:13.958502  148525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:21:13.960099  148525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:21:13.961930  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:21:13.962374  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.962450  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.978323  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I1010 19:21:13.978926  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.979520  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.979538  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.979954  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.980144  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:13.980446  148525 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:21:13.980745  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.980784  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.996046  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I1010 19:21:13.996534  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.997069  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.997097  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.997530  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.997788  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:14.033593  148525 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 19:21:14.035367  148525 start.go:297] selected driver: kvm2
	I1010 19:21:14.035394  148525 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.035526  148525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:21:14.036341  148525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.036452  148525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:21:14.052462  148525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:21:14.052918  148525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:21:14.052967  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:21:14.053019  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:21:14.053067  148525 start.go:340] cluster config:
	{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.053178  148525 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.055485  148525 out.go:177] * Starting "default-k8s-diff-port-361847" primary control-plane node in "default-k8s-diff-port-361847" cluster
	I1010 19:21:16.773106  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:14.056945  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:21:14.057002  148525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 19:21:14.057014  148525 cache.go:56] Caching tarball of preloaded images
	I1010 19:21:14.057118  148525 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:21:14.057134  148525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 19:21:14.057268  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:21:14.057476  148525 start.go:360] acquireMachinesLock for default-k8s-diff-port-361847: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:21:22.853158  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:25.925174  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:32.005160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:35.077198  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:41.157130  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:44.229127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:50.309136  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:53.381191  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:59.461129  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:02.533201  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:08.613124  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:11.685169  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:17.765161  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:20.837208  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:26.917127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:29.989172  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:36.069147  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:39.141173  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:45.221160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:48.293141  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:51.297376  147758 start.go:364] duration metric: took 3m49.312490934s to acquireMachinesLock for "embed-certs-541370"
	I1010 19:22:51.297453  147758 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:22:51.297464  147758 fix.go:54] fixHost starting: 
	I1010 19:22:51.297787  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:22:51.297848  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:22:51.314087  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1010 19:22:51.314588  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:22:51.315115  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:22:51.315138  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:22:51.315509  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:22:51.315691  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:22:51.315879  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:22:51.317597  147758 fix.go:112] recreateIfNeeded on embed-certs-541370: state=Stopped err=<nil>
	I1010 19:22:51.317621  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	W1010 19:22:51.317781  147758 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:22:51.319664  147758 out.go:177] * Restarting existing kvm2 VM for "embed-certs-541370" ...
	I1010 19:22:51.320967  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Start
	I1010 19:22:51.321134  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring networks are active...
	I1010 19:22:51.322026  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network default is active
	I1010 19:22:51.322468  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network mk-embed-certs-541370 is active
	I1010 19:22:51.322874  147758 main.go:141] libmachine: (embed-certs-541370) Getting domain xml...
	I1010 19:22:51.323687  147758 main.go:141] libmachine: (embed-certs-541370) Creating domain...
	I1010 19:22:51.294881  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:22:51.294927  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295226  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:22:51.295256  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295454  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:22:51.297198  147213 machine.go:96] duration metric: took 4m37.414594306s to provisionDockerMachine
	I1010 19:22:51.297252  147213 fix.go:56] duration metric: took 4m37.436635356s for fixHost
	I1010 19:22:51.297259  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 4m37.436668423s
	W1010 19:22:51.297278  147213 start.go:714] error starting host: provision: host is not running
	W1010 19:22:51.297382  147213 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1010 19:22:51.297396  147213 start.go:729] Will try again in 5 seconds ...
	I1010 19:22:52.568699  147758 main.go:141] libmachine: (embed-certs-541370) Waiting to get IP...
	I1010 19:22:52.569582  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.569952  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.570018  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.569935  148914 retry.go:31] will retry after 261.244287ms: waiting for machine to come up
	I1010 19:22:52.832639  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.833280  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.833310  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.833200  148914 retry.go:31] will retry after 304.116732ms: waiting for machine to come up
	I1010 19:22:53.138770  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.139091  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.139124  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.139055  148914 retry.go:31] will retry after 484.354474ms: waiting for machine to come up
	I1010 19:22:53.624831  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.625293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.625323  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.625234  148914 retry.go:31] will retry after 591.916836ms: waiting for machine to come up
	I1010 19:22:54.219214  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.219732  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.219763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.219673  148914 retry.go:31] will retry after 614.162479ms: waiting for machine to come up
	I1010 19:22:54.835573  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.836038  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.836063  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.835988  148914 retry.go:31] will retry after 824.170953ms: waiting for machine to come up
	I1010 19:22:55.662092  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:55.662646  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:55.662668  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:55.662586  148914 retry.go:31] will retry after 928.483848ms: waiting for machine to come up
	I1010 19:22:56.593200  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:56.593724  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:56.593756  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:56.593679  148914 retry.go:31] will retry after 941.138644ms: waiting for machine to come up
	I1010 19:22:56.299351  147213 start.go:360] acquireMachinesLock for no-preload-320324: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:22:57.536977  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:57.537403  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:57.537429  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:57.537331  148914 retry.go:31] will retry after 1.262203584s: waiting for machine to come up
	I1010 19:22:58.801921  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:58.802420  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:58.802454  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:58.802381  148914 retry.go:31] will retry after 2.154751391s: waiting for machine to come up
	I1010 19:23:00.960100  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:00.960661  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:00.960684  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:00.960607  148914 retry.go:31] will retry after 1.945155171s: waiting for machine to come up
	I1010 19:23:02.907705  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:02.908097  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:02.908129  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:02.908038  148914 retry.go:31] will retry after 3.245262469s: waiting for machine to come up
	I1010 19:23:06.157527  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:06.157897  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:06.157925  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:06.157858  148914 retry.go:31] will retry after 3.973579024s: waiting for machine to come up
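The "waiting for machine to come up" lines above come from minikube's retry helper, which re-polls the libvirt domain for a DHCP lease with a growing, jittered delay. Purely as an illustration of that retry-with-backoff pattern (the helper below is a hypothetical sketch, not minikube's actual retry.go API), a minimal self-contained Go example:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or the timeout passes,
// sleeping a jittered, growing delay between attempts, similar to the
// "will retry after ..." lines in the log above.
func retryWithBackoff(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Add jitter and roughly double the delay, capping it at a few seconds.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
}

func main() {
	// Stand-in for "look up the domain's current IP in the DHCP leases".
	calls := 0
	err := retryWithBackoff(30*time.Second, func() error {
		calls++
		if calls < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err)
}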
	I1010 19:23:11.321975  148123 start.go:364] duration metric: took 3m17.648521443s to acquireMachinesLock for "old-k8s-version-947203"
	I1010 19:23:11.322043  148123 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:11.322056  148123 fix.go:54] fixHost starting: 
	I1010 19:23:11.322537  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:11.322596  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:11.339586  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1010 19:23:11.340091  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:11.340686  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:23:11.340719  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:11.341101  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:11.341295  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:11.341453  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetState
	I1010 19:23:11.343215  148123 fix.go:112] recreateIfNeeded on old-k8s-version-947203: state=Stopped err=<nil>
	I1010 19:23:11.343247  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	W1010 19:23:11.343408  148123 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:11.345653  148123 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-947203" ...
	I1010 19:23:10.135369  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has current primary IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135830  147758 main.go:141] libmachine: (embed-certs-541370) Found IP for machine: 192.168.39.120
	I1010 19:23:10.135839  147758 main.go:141] libmachine: (embed-certs-541370) Reserving static IP address...
	I1010 19:23:10.136283  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.136311  147758 main.go:141] libmachine: (embed-certs-541370) Reserved static IP address: 192.168.39.120
	I1010 19:23:10.136327  147758 main.go:141] libmachine: (embed-certs-541370) DBG | skip adding static IP to network mk-embed-certs-541370 - found existing host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"}
	I1010 19:23:10.136339  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Getting to WaitForSSH function...
	I1010 19:23:10.136351  147758 main.go:141] libmachine: (embed-certs-541370) Waiting for SSH to be available...
	I1010 19:23:10.138861  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139259  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.139293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139438  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH client type: external
	I1010 19:23:10.139472  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa (-rw-------)
	I1010 19:23:10.139517  147758 main.go:141] libmachine: (embed-certs-541370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:10.139541  147758 main.go:141] libmachine: (embed-certs-541370) DBG | About to run SSH command:
	I1010 19:23:10.139562  147758 main.go:141] libmachine: (embed-certs-541370) DBG | exit 0
	I1010 19:23:10.261078  147758 main.go:141] libmachine: (embed-certs-541370) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:10.261533  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetConfigRaw
	I1010 19:23:10.262192  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.265071  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265467  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.265515  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265737  147758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/config.json ...
	I1010 19:23:10.265941  147758 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:10.265960  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:10.266188  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.269186  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269618  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.269649  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269799  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.269984  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270206  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270345  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.270550  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.270834  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.270849  147758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:10.373285  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:10.373316  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373625  147758 buildroot.go:166] provisioning hostname "embed-certs-541370"
	I1010 19:23:10.373660  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373835  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.376552  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.376951  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.376994  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.377132  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.377332  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377489  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377606  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.377745  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.377918  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.377930  147758 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-541370 && echo "embed-certs-541370" | sudo tee /etc/hostname
	I1010 19:23:10.495847  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-541370
	
	I1010 19:23:10.495880  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.498868  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499205  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.499247  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499362  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.499556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499700  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499829  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.499961  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.500187  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.500210  147758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-541370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-541370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-541370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:10.614318  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
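Provisioning steps like the hostname script above are executed on the guest over SSH with the machine's generated private key (the id_rsa path shown earlier in the log). The sketch below only approximates that flow, assuming the golang.org/x/crypto/ssh package; the address, user, key path, and command are placeholders, not minikube's internals:

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the guest and runs one shell command, roughly what the
// hostname and certificate provisioning steps in the log do.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Placeholder values; the real address and key come from the machine config.
	out, err := runOverSSH("192.168.39.120:22", "docker",
		"/path/to/machines/embed-certs-541370/id_rsa", "hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}

Skipping host-key verification is tolerable here only because these are disposable test VMs, which is also why the external ssh invocation in the log passes StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null.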
	I1010 19:23:10.614357  147758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:10.614412  147758 buildroot.go:174] setting up certificates
	I1010 19:23:10.614429  147758 provision.go:84] configureAuth start
	I1010 19:23:10.614457  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.614763  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.617457  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.617888  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.617916  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.618078  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.620243  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620635  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.620666  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620789  147758 provision.go:143] copyHostCerts
	I1010 19:23:10.620895  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:10.620913  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:10.620998  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:10.621111  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:10.621123  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:10.621159  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:10.621245  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:10.621257  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:10.621292  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:10.621364  147758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.embed-certs-541370 san=[127.0.0.1 192.168.39.120 embed-certs-541370 localhost minikube]
	I1010 19:23:10.697456  147758 provision.go:177] copyRemoteCerts
	I1010 19:23:10.697515  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:10.697547  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.700439  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.700799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700956  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.701162  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.701320  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.701465  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:10.783442  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:10.808446  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 19:23:10.832117  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:23:10.856286  147758 provision.go:87] duration metric: took 241.840139ms to configureAuth
	I1010 19:23:10.856318  147758 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:10.856528  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:10.856640  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.859252  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859677  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.859708  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859916  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.860087  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860222  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.860524  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.860688  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.860702  147758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:11.086349  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:11.086375  147758 machine.go:96] duration metric: took 820.421344ms to provisionDockerMachine
	I1010 19:23:11.086386  147758 start.go:293] postStartSetup for "embed-certs-541370" (driver="kvm2")
	I1010 19:23:11.086401  147758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:11.086423  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.086755  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:11.086783  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.089482  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.089838  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.089860  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.090042  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.090253  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.090410  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.090535  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.172474  147758 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:11.176699  147758 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:11.176733  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:11.176800  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:11.176899  147758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:11.177044  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:11.186985  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:11.211385  147758 start.go:296] duration metric: took 124.982089ms for postStartSetup
	I1010 19:23:11.211442  147758 fix.go:56] duration metric: took 19.913977793s for fixHost
	I1010 19:23:11.211472  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.214421  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214780  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.214812  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214999  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.215219  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215429  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215612  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.215786  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:11.215974  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:11.215985  147758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:11.321786  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588191.295446348
	
	I1010 19:23:11.321814  147758 fix.go:216] guest clock: 1728588191.295446348
	I1010 19:23:11.321822  147758 fix.go:229] Guest: 2024-10-10 19:23:11.295446348 +0000 UTC Remote: 2024-10-10 19:23:11.211447413 +0000 UTC m=+249.373680838 (delta=83.998935ms)
	I1010 19:23:11.321870  147758 fix.go:200] guest clock delta is within tolerance: 83.998935ms
	I1010 19:23:11.321877  147758 start.go:83] releasing machines lock for "embed-certs-541370", held for 20.024455781s
	I1010 19:23:11.321905  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.322169  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:11.325004  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325350  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.325375  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325566  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326090  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326294  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326383  147758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:11.326444  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.326501  147758 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:11.326529  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.329311  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329657  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.329690  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329713  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329866  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330057  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330160  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.330188  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.330204  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330346  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.330538  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330687  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330821  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.406525  147758 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:11.428958  147758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:11.577663  147758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:11.584024  147758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:11.584112  147758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:11.603163  147758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:11.603190  147758 start.go:495] detecting cgroup driver to use...
	I1010 19:23:11.603291  147758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:11.624744  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:11.645477  147758 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:11.645537  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:11.660216  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:11.675019  147758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:11.796038  147758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:11.967750  147758 docker.go:233] disabling docker service ...
	I1010 19:23:11.967828  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:11.983184  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:12.001603  147758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:12.149408  147758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:12.306724  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:12.324302  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:12.345426  147758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:12.345508  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.357812  147758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:12.357883  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.370095  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.382389  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.395000  147758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:12.408429  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.426851  147758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.450568  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.463434  147758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:12.474537  147758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:12.474606  147758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:12.489074  147758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:12.500048  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:12.635695  147758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:12.733511  147758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:12.733593  147758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:12.739072  147758 start.go:563] Will wait 60s for crictl version
	I1010 19:23:12.739138  147758 ssh_runner.go:195] Run: which crictl
	I1010 19:23:12.743675  147758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:12.792272  147758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:12.792379  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.829968  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.862579  147758 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:11.347158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .Start
	I1010 19:23:11.347359  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring networks are active...
	I1010 19:23:11.348252  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network default is active
	I1010 19:23:11.348689  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network mk-old-k8s-version-947203 is active
	I1010 19:23:11.349139  148123 main.go:141] libmachine: (old-k8s-version-947203) Getting domain xml...
	I1010 19:23:11.349799  148123 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:23:12.671472  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting to get IP...
	I1010 19:23:12.672628  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.673145  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.673278  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.673132  149057 retry.go:31] will retry after 280.111667ms: waiting for machine to come up
	I1010 19:23:12.954865  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.955325  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.955377  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.955300  149057 retry.go:31] will retry after 289.967238ms: waiting for machine to come up
	I1010 19:23:13.247039  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.247663  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.247769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.247632  149057 retry.go:31] will retry after 319.085935ms: waiting for machine to come up
	I1010 19:23:12.863797  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:12.867335  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.867760  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:12.867794  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.868029  147758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:12.872503  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:12.887684  147758 kubeadm.go:883] updating cluster {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:12.887809  147758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:12.887853  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:12.924155  147758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:12.924240  147758 ssh_runner.go:195] Run: which lz4
	I1010 19:23:12.928613  147758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:12.933024  147758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:12.933069  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:14.450790  147758 crio.go:462] duration metric: took 1.522223644s to copy over tarball
	I1010 19:23:14.450893  147758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:16.642155  147758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191220673s)
	I1010 19:23:16.642193  147758 crio.go:469] duration metric: took 2.191371146s to extract the tarball
	I1010 19:23:16.642202  147758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:16.679611  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:16.723840  147758 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:16.723865  147758 cache_images.go:84] Images are preloaded, skipping loading
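
This block documents the image-preload path: the first crictl query finds no kube-apiserver image, so the cached preloaded-images tarball is copied to /preloaded.tar.lz4, unpacked into /var with tar and lz4, removed, and a second crictl query confirms all images are now present. A rough local-only Go sketch of the same check-then-extract sequence follows; havePreloadedImages and the hard-coded image tag are assumptions for illustration, and the real code drives each step through the SSH runner.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// havePreloadedImages returns true if crictl already reports the given image.
// Assumes crictl is installed and the CRI socket is reachable as root.
func havePreloadedImages(image string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		return false, err
	}
	for _, img := range resp.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, image) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := havePreloadedImages("kube-apiserver:v1.31.1")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	if !ok {
		// Unpack the preloaded tarball already copied to /preloaded.tar.lz4, as in the log.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if err := cmd.Run(); err != nil {
			fmt.Println("extract failed:", err)
		}
	}
}
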
	I1010 19:23:16.723874  147758 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.1 crio true true} ...
	I1010 19:23:16.723998  147758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-541370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:16.724081  147758 ssh_runner.go:195] Run: crio config
	I1010 19:23:16.779659  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:16.779682  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:16.779693  147758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:16.779714  147758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-541370 NodeName:embed-certs-541370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:16.779842  147758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-541370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:16.779904  147758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:16.791424  147758 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:16.791493  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:16.801715  147758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1010 19:23:16.821364  147758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:16.842703  147758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1010 19:23:16.864835  147758 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:16.868928  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:13.568192  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.568690  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.568721  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.568657  149057 retry.go:31] will retry after 575.032546ms: waiting for machine to come up
	I1010 19:23:14.145650  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.146293  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.146319  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.146234  149057 retry.go:31] will retry after 702.803794ms: waiting for machine to come up
	I1010 19:23:14.851201  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.851736  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.851769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.851706  149057 retry.go:31] will retry after 883.195401ms: waiting for machine to come up
	I1010 19:23:15.736627  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:15.737199  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:15.737252  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:15.737155  149057 retry.go:31] will retry after 794.699815ms: waiting for machine to come up
	I1010 19:23:16.533952  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:16.534510  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:16.534545  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:16.534443  149057 retry.go:31] will retry after 961.751912ms: waiting for machine to come up
	I1010 19:23:17.497535  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:17.498007  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:17.498035  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:17.497936  149057 retry.go:31] will retry after 1.423503471s: waiting for machine to come up
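
The interleaved retry.go lines above show libmachine polling the libvirt DHCP leases for the old-k8s-version VM's IP, sleeping a growing, jittered interval between attempts. A small Go sketch of that wait-with-backoff pattern follows; waitForIP and the fake lookup are illustrative stand-ins, not the driver's real lease query.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a jittered,
// roughly doubling interval between attempts, similar to the retries above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2 // exponential backoff
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	// Fake lookup that succeeds on the third attempt, just to exercise the loop.
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.61.112", nil
	}, 10)
	fmt.Println(ip, err)
}
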
	I1010 19:23:16.883162  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:17.027646  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:17.045083  147758 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370 for IP: 192.168.39.120
	I1010 19:23:17.045108  147758 certs.go:194] generating shared ca certs ...
	I1010 19:23:17.045130  147758 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:17.045491  147758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:17.045561  147758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:17.045579  147758 certs.go:256] generating profile certs ...
	I1010 19:23:17.045730  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/client.key
	I1010 19:23:17.045814  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key.dd7630a8
	I1010 19:23:17.045874  147758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key
	I1010 19:23:17.046015  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:17.046055  147758 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:17.046075  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:17.046114  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:17.046150  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:17.046177  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:17.046235  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:17.047131  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:17.087057  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:17.137707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:17.181707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:17.213227  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 19:23:17.247846  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:17.275989  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:17.301144  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 19:23:17.326232  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:17.350586  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:17.374666  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:17.399570  147758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:17.417846  147758 ssh_runner.go:195] Run: openssl version
	I1010 19:23:17.424206  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:17.436091  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441020  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441090  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.447318  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:17.459191  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:17.470878  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476185  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476248  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.482808  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:17.494626  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:17.506522  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511484  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511558  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.517445  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:17.529109  147758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:17.534139  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:17.540846  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:17.547429  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:17.554350  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:17.561036  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:17.567571  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
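
Each `openssl x509 -checkend 86400` run above verifies that a control-plane certificate remains valid for at least the next 24 hours before the cluster is restarted. The Go sketch below performs the equivalent check with crypto/x509; expiresWithin and the two sample paths are illustrative, and reading the certificates needs the same root access the ssh_runner has.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file will
// expire within d, mirroring what `openssl x509 -checkend` verifies.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Paths taken from the log; add the rest of the cert list as needed.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, err)
	}
}
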
	I1010 19:23:17.574019  147758 kubeadm.go:392] StartCluster: {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:17.574128  147758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:17.574187  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.612699  147758 cri.go:89] found id: ""
	I1010 19:23:17.612804  147758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:17.623827  147758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:17.623856  147758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:17.623917  147758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:17.634732  147758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:17.635754  147758 kubeconfig.go:125] found "embed-certs-541370" server: "https://192.168.39.120:8443"
	I1010 19:23:17.637813  147758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:17.648543  147758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I1010 19:23:17.648590  147758 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:17.648606  147758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:17.648671  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.693966  147758 cri.go:89] found id: ""
	I1010 19:23:17.694057  147758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:17.715977  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:17.727871  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:17.727891  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:17.727942  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:17.738274  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:17.738340  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:17.748925  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:17.758945  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:17.759008  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:17.769169  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.779196  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:17.779282  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.790948  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:17.802264  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:17.802332  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:17.814009  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:17.826820  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:17.947270  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.128720  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.181409785s)
	I1010 19:23:19.128770  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.343735  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.419728  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
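
Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml rather than performing a full init. The sketch below simply replays those commands in order with os/exec; it mirrors the shell lines above and is not minikube's actual ssh_runner wrapper.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phase order and paths copied from the log above; kubeadm and the saved
	// config must already be present on the node for this to succeed.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
			return
		}
	}
}
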
	I1010 19:23:19.529802  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:19.529930  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.030019  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.530833  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.558314  147758 api_server.go:72] duration metric: took 1.028510044s to wait for apiserver process to appear ...
	I1010 19:23:20.558350  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:23:20.558375  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:20.558991  147758 api_server.go:269] stopped: https://192.168.39.120:8443/healthz: Get "https://192.168.39.120:8443/healthz": dial tcp 192.168.39.120:8443: connect: connection refused
	I1010 19:23:21.058727  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:18.923057  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:18.923636  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:18.923671  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:18.923582  149057 retry.go:31] will retry after 2.09836426s: waiting for machine to come up
	I1010 19:23:21.023500  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:21.024054  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:21.024084  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:21.023999  149057 retry.go:31] will retry after 2.809962093s: waiting for machine to come up
	I1010 19:23:23.187135  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:23:23.187187  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:23:23.187203  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.233367  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.233414  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:23.558658  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.575108  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.575139  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.058679  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.065735  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:24.065763  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.559440  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.565460  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:23:24.571828  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:23:24.571859  147758 api_server.go:131] duration metric: took 4.013501806s to wait for apiserver health ...
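
The healthz exchange above is the expected progression for a restarting apiserver: connection refused while the static pod starts, a 403 for the anonymous probe once the TLS listener is up, 500 while post-start hooks (RBAC bootstrap roles, priority classes, service-IP repair) are still running, and finally 200. A minimal poller in the same spirit is sketched below; the InsecureSkipVerify client and the fixed interval are simplifications for a self-contained example, not how minikube authenticates its probe.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. TLS verification is skipped purely to keep the
// sketch self-contained; a real probe would trust the cluster CA instead.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.120:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
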
	I1010 19:23:24.571869  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:24.571875  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:24.573875  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:23:24.575458  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:23:24.586870  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:23:24.624362  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:23:24.643465  147758 system_pods.go:59] 8 kube-system pods found
	I1010 19:23:24.643516  147758 system_pods.go:61] "coredns-7c65d6cfc9-fgtkg" [df696e79-ca6f-4d73-a57e-9c6cdc93c505] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:23:24.643532  147758 system_pods.go:61] "etcd-embed-certs-541370" [254fa12c-b0d2-499f-8dd9-c1505efeaaab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:23:24.643543  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [fcd3809d-d325-4481-8e86-c246e29458fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:23:24.643565  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ab0fdd6b-d9b7-48dc-b82f-29b21d2295ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:23:24.643584  147758 system_pods.go:61] "kube-proxy-f5l6x" [446383fa-44c5-4b9e-bfc5-e38799597e75] Running
	I1010 19:23:24.643592  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [1c6af7e7-ce16-4ae2-8feb-e5d474173de1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:23:24.643603  147758 system_pods.go:61] "metrics-server-6867b74b74-kw529" [aad00321-d499-4563-849e-286d6e699fc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:23:24.643611  147758 system_pods.go:61] "storage-provisioner" [df4ae621-5066-4276-9276-a0538a9f9dd1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:23:24.643620  147758 system_pods.go:74] duration metric: took 19.234558ms to wait for pod list to return data ...
	I1010 19:23:24.643637  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:23:24.651647  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:23:24.651683  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:23:24.651699  147758 node_conditions.go:105] duration metric: took 8.056629ms to run NodePressure ...
	I1010 19:23:24.651720  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:24.915651  147758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921104  147758 kubeadm.go:739] kubelet initialised
	I1010 19:23:24.921131  147758 kubeadm.go:740] duration metric: took 5.44643ms waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921142  147758 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:23:24.927535  147758 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
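
After the addon phase, the log waits up to 4 minutes for each system-critical pod to report the Ready condition, starting with coredns. A hedged client-go sketch of that readiness poll follows; the kubeconfig path, pod name, and timeout are taken from the log or assumed for illustration, and minikube's own pod_ready helper is more involved than a single-pod poll.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod has its Ready condition set to True,
// which is the condition the pod_ready lines above are waiting on.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-fgtkg", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
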
	I1010 19:23:23.837827  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:23.838271  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:23.838295  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:23.838220  149057 retry.go:31] will retry after 2.562944835s: waiting for machine to come up
	I1010 19:23:26.402699  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:26.403250  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:26.403294  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:26.403192  149057 retry.go:31] will retry after 3.867656846s: waiting for machine to come up
	I1010 19:23:26.932764  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:28.936055  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.434959  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.893914  148525 start.go:364] duration metric: took 2m17.836396131s to acquireMachinesLock for "default-k8s-diff-port-361847"
	I1010 19:23:31.893993  148525 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:31.894007  148525 fix.go:54] fixHost starting: 
	I1010 19:23:31.894438  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:31.894502  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:31.914583  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I1010 19:23:31.915054  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:31.915535  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:23:31.915560  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:31.915967  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:31.916207  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:31.916387  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:23:31.918035  148525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-361847: state=Stopped err=<nil>
	I1010 19:23:31.918073  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	W1010 19:23:31.918241  148525 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:31.920390  148525 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-361847" ...
	I1010 19:23:30.275222  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.275762  148123 main.go:141] libmachine: (old-k8s-version-947203) Found IP for machine: 192.168.61.112
	I1010 19:23:30.275779  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserving static IP address...
	I1010 19:23:30.275794  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has current primary IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.276478  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.276529  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserved static IP address: 192.168.61.112
	I1010 19:23:30.276553  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | skip adding static IP to network mk-old-k8s-version-947203 - found existing host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"}
	I1010 19:23:30.276574  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:23:30.276590  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting for SSH to be available...
	I1010 19:23:30.278486  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278854  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.278890  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278984  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:23:30.279016  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:23:30.279051  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:30.279068  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:23:30.279080  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:23:30.405053  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:30.405492  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:23:30.406224  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.408830  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409178  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.409204  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409462  148123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:23:30.409673  148123 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:30.409693  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:30.409916  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.412131  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412400  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.412427  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.412770  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.412965  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.413117  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.413368  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.413653  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.413668  148123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:30.525149  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:30.525176  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525417  148123 buildroot.go:166] provisioning hostname "old-k8s-version-947203"
	I1010 19:23:30.525478  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525703  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.528343  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528681  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.528708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528841  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.529035  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529185  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529303  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.529460  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.529635  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.529645  148123 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947203 && echo "old-k8s-version-947203" | sudo tee /etc/hostname
	I1010 19:23:30.655605  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947203
	
	I1010 19:23:30.655644  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.658439  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.658834  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.658869  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.659063  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.659285  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659458  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.659733  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.659905  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.659921  148123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:30.781974  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:30.782011  148123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:30.782033  148123 buildroot.go:174] setting up certificates
	I1010 19:23:30.782048  148123 provision.go:84] configureAuth start
	I1010 19:23:30.782059  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.782379  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.785189  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785584  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.785613  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785782  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.788086  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788432  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.788458  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788582  148123 provision.go:143] copyHostCerts
	I1010 19:23:30.788645  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:30.788660  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:30.788729  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:30.788839  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:30.788867  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:30.788900  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:30.788977  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:30.788986  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:30.789013  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:30.789081  148123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947203 san=[127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]
	I1010 19:23:31.240626  148123 provision.go:177] copyRemoteCerts
	I1010 19:23:31.240698  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:31.240737  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.243678  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244108  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.244138  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244356  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.244565  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.244765  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.244929  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.331337  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:31.356266  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1010 19:23:31.381186  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:31.405301  148123 provision.go:87] duration metric: took 623.235619ms to configureAuth
	I1010 19:23:31.405337  148123 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:31.405541  148123 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:23:31.405620  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.408111  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408444  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.408470  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408695  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.408947  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409109  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.409374  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.409583  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.409609  148123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:31.647023  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:31.647050  148123 machine.go:96] duration metric: took 1.237364012s to provisionDockerMachine
	I1010 19:23:31.647063  148123 start.go:293] postStartSetup for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:23:31.647073  148123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:31.647104  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.647464  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:31.647502  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.649991  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650288  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.650311  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650484  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.650637  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.650832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.650968  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.735985  148123 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:31.740689  148123 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:31.740725  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:31.740803  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:31.740947  148123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:31.741057  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:31.751383  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:31.777091  148123 start.go:296] duration metric: took 130.011618ms for postStartSetup
	I1010 19:23:31.777134  148123 fix.go:56] duration metric: took 20.455078936s for fixHost
	I1010 19:23:31.777158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.780083  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780661  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.780708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.781069  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781245  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781382  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.781534  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.781711  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.781721  148123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:31.893727  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588211.849777581
	
	I1010 19:23:31.893756  148123 fix.go:216] guest clock: 1728588211.849777581
	I1010 19:23:31.893763  148123 fix.go:229] Guest: 2024-10-10 19:23:31.849777581 +0000 UTC Remote: 2024-10-10 19:23:31.777138808 +0000 UTC m=+218.253674512 (delta=72.638773ms)
	I1010 19:23:31.893806  148123 fix.go:200] guest clock delta is within tolerance: 72.638773ms
	I1010 19:23:31.893813  148123 start.go:83] releasing machines lock for "old-k8s-version-947203", held for 20.571797307s
	I1010 19:23:31.893848  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.894151  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:31.896747  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897156  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.897207  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897385  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898036  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898375  148123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:31.898422  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.898461  148123 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:31.898487  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.901274  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901450  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901608  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901649  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901784  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.901920  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901945  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901952  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902112  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.902130  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902306  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902348  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.902412  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902533  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:32.016264  148123 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:32.022618  148123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:32.169502  148123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:32.175960  148123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:32.176039  148123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:32.193584  148123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:32.193615  148123 start.go:495] detecting cgroup driver to use...
	I1010 19:23:32.193698  148123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:32.210923  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:32.227172  148123 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:32.227254  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:32.242142  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:32.256896  148123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:32.387462  148123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:32.575184  148123 docker.go:233] disabling docker service ...
	I1010 19:23:32.575257  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:32.590667  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:32.604825  148123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:32.739827  148123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:32.863648  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:32.879435  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:32.899582  148123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:23:32.899659  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.915010  148123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:32.915081  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.931181  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.944038  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.955182  148123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:32.972220  148123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:32.982681  148123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:32.982739  148123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:32.997012  148123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:33.009468  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:33.180403  148123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:33.284934  148123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:33.285008  148123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:33.290063  148123 start.go:563] Will wait 60s for crictl version
	I1010 19:23:33.290124  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:33.294752  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:33.342710  148123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:33.342792  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.372821  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.409395  148123 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:23:33.411012  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:33.414374  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.414789  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:33.414836  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.415084  148123 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:33.420164  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:33.433442  148123 kubeadm.go:883] updating cluster {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:33.433631  148123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:23:33.433717  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:33.480013  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:33.480078  148123 ssh_runner.go:195] Run: which lz4
	I1010 19:23:33.485443  148123 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:33.490986  148123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:33.491032  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:23:31.921836  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Start
	I1010 19:23:31.922036  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring networks are active...
	I1010 19:23:31.922890  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network default is active
	I1010 19:23:31.923271  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network mk-default-k8s-diff-port-361847 is active
	I1010 19:23:31.923685  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Getting domain xml...
	I1010 19:23:31.924449  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Creating domain...
	I1010 19:23:33.241164  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting to get IP...
	I1010 19:23:33.242273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242713  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242814  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.242702  149213 retry.go:31] will retry after 195.013046ms: waiting for machine to come up
	I1010 19:23:33.438965  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439452  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.439379  149213 retry.go:31] will retry after 344.223823ms: waiting for machine to come up
	I1010 19:23:33.785167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785833  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785864  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.785780  149213 retry.go:31] will retry after 342.787658ms: waiting for machine to come up
	I1010 19:23:33.435066  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:34.936768  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:34.936800  147758 pod_ready.go:82] duration metric: took 10.009235225s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:34.936814  147758 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944395  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.944430  147758 pod_ready.go:82] duration metric: took 1.007599746s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944445  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953224  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.953255  147758 pod_ready.go:82] duration metric: took 8.801702ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953266  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.227279  148123 crio.go:462] duration metric: took 1.74186828s to copy over tarball
	I1010 19:23:35.227383  148123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:38.216499  148123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.989082447s)
	I1010 19:23:38.216533  148123 crio.go:469] duration metric: took 2.989218587s to extract the tarball
	I1010 19:23:38.216541  148123 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:38.259699  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:38.308284  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:38.308313  148123 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:23:38.308422  148123 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:23:38.308477  148123 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.308482  148123 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.308593  148123 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.308617  148123 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.308405  148123 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.308446  148123 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.308530  148123 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310558  148123 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.310612  148123 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.310642  148123 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.310652  148123 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310645  148123 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.310560  148123 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.310733  148123 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.311068  148123 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:23:38.469174  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.469563  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.488463  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.490815  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.491841  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.496079  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.501292  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:23:34.130443  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130998  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.130915  149213 retry.go:31] will retry after 393.100812ms: waiting for machine to come up
	I1010 19:23:34.525570  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526032  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526060  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.525980  149213 retry.go:31] will retry after 465.468437ms: waiting for machine to come up
	I1010 19:23:34.992775  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993348  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993386  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.993287  149213 retry.go:31] will retry after 907.884473ms: waiting for machine to come up
	I1010 19:23:35.902481  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902942  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:35.902878  149213 retry.go:31] will retry after 1.157806188s: waiting for machine to come up
	I1010 19:23:37.062068  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062777  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:37.062706  149213 retry.go:31] will retry after 1.432559208s: waiting for machine to come up
	I1010 19:23:38.496653  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497153  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:38.497066  149213 retry.go:31] will retry after 1.559787003s: waiting for machine to come up
	I1010 19:23:37.961068  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.065559  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.528757  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.528786  147758 pod_ready.go:82] duration metric: took 4.575513259s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.528802  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538002  147758 pod_ready.go:93] pod "kube-proxy-f5l6x" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.538034  147758 pod_ready.go:82] duration metric: took 9.22357ms for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538049  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543594  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.543615  147758 pod_ready.go:82] duration metric: took 5.558665ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543626  147758 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:38.581315  148123 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:23:38.581361  148123 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:23:38.581385  148123 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.581407  148123 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.581442  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.581457  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.642668  148123 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:23:38.642721  148123 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.642777  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658235  148123 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:23:38.658291  148123 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.658288  148123 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:23:38.658328  148123 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.658346  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658373  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.678038  148123 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:23:38.678097  148123 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.678155  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682719  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.682773  148123 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:23:38.682721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.682791  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.682812  148123 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:23:38.682845  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.682872  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.691181  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842626  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:38.842714  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.842754  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.842797  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.842721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.842851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842789  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.989994  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.005815  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:39.005923  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:39.010029  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:39.010105  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:39.010150  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:39.010230  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:39.134646  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:39.141431  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.176750  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:23:39.176945  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:23:39.176989  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:23:39.177038  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:23:39.177073  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:23:39.177104  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:23:39.335056  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:23:39.335113  148123 cache_images.go:92] duration metric: took 1.026784312s to LoadCachedImages
	W1010 19:23:39.335180  148123 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1010 19:23:39.335192  148123 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.20.0 crio true true} ...
	I1010 19:23:39.335305  148123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-947203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:39.335380  148123 ssh_runner.go:195] Run: crio config
	I1010 19:23:39.394307  148123 cni.go:84] Creating CNI manager for ""
	I1010 19:23:39.394338  148123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:39.394350  148123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:39.394378  148123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947203 NodeName:old-k8s-version-947203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:23:39.394572  148123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-947203"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:39.394662  148123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:23:39.405510  148123 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:39.405600  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:39.415968  148123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1010 19:23:39.443234  148123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:39.462448  148123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:23:39.481959  148123 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:39.486188  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:39.501922  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:39.642769  148123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:39.661335  148123 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203 for IP: 192.168.61.112
	I1010 19:23:39.661363  148123 certs.go:194] generating shared ca certs ...
	I1010 19:23:39.661384  148123 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:39.661595  148123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:39.661658  148123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:39.661671  148123 certs.go:256] generating profile certs ...
	I1010 19:23:39.661779  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key
	I1010 19:23:39.661867  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52
	I1010 19:23:39.661922  148123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key
	I1010 19:23:39.662312  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:39.662395  148123 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:39.662410  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:39.662447  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:39.662501  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:39.662531  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:39.662622  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:39.663372  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:39.710366  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:39.752724  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:39.801408  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:39.843557  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 19:23:39.892522  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:39.938519  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:39.966140  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:39.992974  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:40.019381  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:40.045769  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:40.080683  148123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:40.099747  148123 ssh_runner.go:195] Run: openssl version
	I1010 19:23:40.107865  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:40.123158  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128594  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128660  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.135319  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:40.147514  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:40.162463  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168387  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168512  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.176304  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:40.191306  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:40.204196  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211044  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211122  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.219574  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:40.232634  148123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:40.237639  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:40.244141  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:40.251188  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:40.258538  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:40.265029  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:40.271754  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:23:40.278352  148123 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:40.278453  148123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:40.278518  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.317771  148123 cri.go:89] found id: ""
	I1010 19:23:40.317838  148123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:40.330494  148123 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:40.330520  148123 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:40.330576  148123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:40.341984  148123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:40.343386  148123 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:23:40.344334  148123 kubeconfig.go:62] /home/jenkins/minikube-integration/19787-81676/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-947203" cluster setting kubeconfig missing "old-k8s-version-947203" context setting]
	I1010 19:23:40.345360  148123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:40.347128  148123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:40.357902  148123 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I1010 19:23:40.357944  148123 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:40.357961  148123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:40.358031  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.399970  148123 cri.go:89] found id: ""
	I1010 19:23:40.400057  148123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:40.418527  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:40.429182  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:40.429217  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:40.429262  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:40.439287  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:40.439363  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:40.449479  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:40.459288  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:40.459359  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:40.472733  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.484890  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:40.484958  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.495301  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:40.504750  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:40.504820  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:40.515034  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:40.526168  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:40.675022  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.566371  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.815296  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.930082  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:42.027597  148123 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:42.027713  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:42.528219  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.527735  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:40.058247  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058783  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058835  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:40.058696  149213 retry.go:31] will retry after 2.214094081s: waiting for machine to come up
	I1010 19:23:42.274629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275194  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:42.275106  149213 retry.go:31] will retry after 2.126528577s: waiting for machine to come up
	I1010 19:23:42.550865  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:45.051043  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:44.028463  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.527916  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.027911  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.528797  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.028772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.527982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.027799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.527894  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.028352  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.527935  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.403101  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403575  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403616  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:44.403534  149213 retry.go:31] will retry after 3.603964622s: waiting for machine to come up
	I1010 19:23:48.008726  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009142  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009191  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:48.009100  149213 retry.go:31] will retry after 3.639744981s: waiting for machine to come up
	I1010 19:23:47.551003  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:49.661572  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:52.858209  147213 start.go:364] duration metric: took 56.558774237s to acquireMachinesLock for "no-preload-320324"
	I1010 19:23:52.858274  147213 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:52.858283  147213 fix.go:54] fixHost starting: 
	I1010 19:23:52.858705  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:52.858742  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:52.878428  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I1010 19:23:52.878955  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:52.879563  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:23:52.879599  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:52.879945  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:52.880144  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:23:52.880282  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:23:52.881626  147213 fix.go:112] recreateIfNeeded on no-preload-320324: state=Stopped err=<nil>
	I1010 19:23:52.881650  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	W1010 19:23:52.881799  147213 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:52.883912  147213 out.go:177] * Restarting existing kvm2 VM for "no-preload-320324" ...
	I1010 19:23:49.028421  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:49.528271  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.028462  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.527867  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.028782  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.528581  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.028732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.027852  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.528647  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.885239  147213 main.go:141] libmachine: (no-preload-320324) Calling .Start
	I1010 19:23:52.885429  147213 main.go:141] libmachine: (no-preload-320324) Ensuring networks are active...
	I1010 19:23:52.886211  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network default is active
	I1010 19:23:52.886749  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network mk-no-preload-320324 is active
	I1010 19:23:52.887310  147213 main.go:141] libmachine: (no-preload-320324) Getting domain xml...
	I1010 19:23:52.888034  147213 main.go:141] libmachine: (no-preload-320324) Creating domain...
	I1010 19:23:51.652975  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653464  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Found IP for machine: 192.168.50.32
	I1010 19:23:51.653487  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserving static IP address...
	I1010 19:23:51.653509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has current primary IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653910  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.653956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | skip adding static IP to network mk-default-k8s-diff-port-361847 - found existing host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"}
	I1010 19:23:51.653974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserved static IP address: 192.168.50.32
	I1010 19:23:51.653993  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for SSH to be available...
	I1010 19:23:51.654006  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Getting to WaitForSSH function...
	I1010 19:23:51.655927  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656210  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.656240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656334  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH client type: external
	I1010 19:23:51.656372  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa (-rw-------)
	I1010 19:23:51.656409  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:51.656425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | About to run SSH command:
	I1010 19:23:51.656436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | exit 0
	I1010 19:23:51.780839  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:51.781206  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetConfigRaw
	I1010 19:23:51.781939  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:51.784347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784663  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.784696  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784918  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:23:51.785134  148525 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:51.785158  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:51.785403  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.787817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788306  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.788347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788547  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.788807  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789038  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789274  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.789515  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.789802  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.789825  148525 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:51.893367  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:51.893399  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893652  148525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-361847"
	I1010 19:23:51.893699  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.896986  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897377  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.897422  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897662  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.897815  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.897949  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.898064  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.898302  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.898489  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.898502  148525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-361847 && echo "default-k8s-diff-port-361847" | sudo tee /etc/hostname
	I1010 19:23:52.015158  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361847
	
	I1010 19:23:52.015199  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.018094  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018468  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.018497  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018683  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.018901  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019039  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.019474  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.019690  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.019708  148525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-361847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-361847/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-361847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:52.133923  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:52.133960  148525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:52.134007  148525 buildroot.go:174] setting up certificates
	I1010 19:23:52.134023  148525 provision.go:84] configureAuth start
	I1010 19:23:52.134043  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:52.134351  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.137242  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137637  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.137670  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137860  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.140264  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.140672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140833  148525 provision.go:143] copyHostCerts
	I1010 19:23:52.140907  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:52.140922  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:52.140977  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:52.141088  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:52.141098  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:52.141118  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:52.141175  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:52.141182  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:52.141213  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:52.141264  148525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-361847 san=[127.0.0.1 192.168.50.32 default-k8s-diff-port-361847 localhost minikube]
	I1010 19:23:52.241146  148525 provision.go:177] copyRemoteCerts
	I1010 19:23:52.241212  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:52.241241  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.244061  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244463  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.244490  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244731  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.244929  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.245110  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.245228  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.327309  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:52.352288  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 19:23:52.376308  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:52.400807  148525 provision.go:87] duration metric: took 266.765119ms to configureAuth
	I1010 19:23:52.400862  148525 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:52.401065  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:52.401171  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.403552  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.403919  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.403950  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.404173  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.404371  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404513  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.404743  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.404927  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.404949  148525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:52.622902  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:52.622930  148525 machine.go:96] duration metric: took 837.779579ms to provisionDockerMachine
	I1010 19:23:52.622942  148525 start.go:293] postStartSetup for "default-k8s-diff-port-361847" (driver="kvm2")
	I1010 19:23:52.622952  148525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:52.622968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.623331  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:52.623369  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.626106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626435  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.626479  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626721  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.626932  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.627091  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.627262  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.708050  148525 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:52.712524  148525 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:52.712550  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:52.712608  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:52.712688  148525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:52.712782  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:52.723719  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:52.747686  148525 start.go:296] duration metric: took 124.729371ms for postStartSetup
	I1010 19:23:52.747727  148525 fix.go:56] duration metric: took 20.853721623s for fixHost
	I1010 19:23:52.747749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.750316  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750645  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.750677  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.751046  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751195  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751333  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.751511  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.751733  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.751749  148525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:52.857986  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588232.831281012
	
	I1010 19:23:52.858019  148525 fix.go:216] guest clock: 1728588232.831281012
	I1010 19:23:52.858029  148525 fix.go:229] Guest: 2024-10-10 19:23:52.831281012 +0000 UTC Remote: 2024-10-10 19:23:52.747731551 +0000 UTC m=+158.845659062 (delta=83.549461ms)
	I1010 19:23:52.858075  148525 fix.go:200] guest clock delta is within tolerance: 83.549461ms
	I1010 19:23:52.858088  148525 start.go:83] releasing machines lock for "default-k8s-diff-port-361847", held for 20.964121636s
	I1010 19:23:52.858120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.858491  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.861220  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.861672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861828  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862337  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862548  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862655  148525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:52.862702  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.862825  148525 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:52.862854  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.865579  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.865960  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866290  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866300  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.866319  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866423  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866496  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866648  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866671  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.866798  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866910  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.966354  148525 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:52.972526  148525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:53.119801  148525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:53.126287  148525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:53.126355  148525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:53.147301  148525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:53.147325  148525 start.go:495] detecting cgroup driver to use...
	I1010 19:23:53.147381  148525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:53.167368  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:53.183239  148525 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:53.183308  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:53.203230  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:53.217261  148525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:53.343555  148525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:53.491952  148525 docker.go:233] disabling docker service ...
	I1010 19:23:53.492054  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:53.508136  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:53.521662  148525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:53.651858  148525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:53.781954  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:53.803934  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:53.826070  148525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:53.826146  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.837506  148525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:53.837587  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.848653  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.860511  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.873254  148525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:53.887862  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.899507  148525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.923325  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.934999  148525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:53.946869  148525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:53.946945  148525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:53.968116  148525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:53.980109  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:54.106345  148525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:54.210345  148525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:54.210417  148525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:54.215968  148525 start.go:563] Will wait 60s for crictl version
	I1010 19:23:54.216037  148525 ssh_runner.go:195] Run: which crictl
	I1010 19:23:54.219885  148525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:54.260286  148525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:54.260375  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.289908  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.320940  148525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:52.050137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.060194  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:56.551981  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.027982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.528065  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.027890  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.527753  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.027877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.028263  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.028743  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.528039  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.234149  147213 main.go:141] libmachine: (no-preload-320324) Waiting to get IP...
	I1010 19:23:54.235147  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.235598  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.235657  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.235580  149378 retry.go:31] will retry after 308.921504ms: waiting for machine to come up
	I1010 19:23:54.546327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.547002  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.547029  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.546956  149378 retry.go:31] will retry after 288.92327ms: waiting for machine to come up
	I1010 19:23:54.837625  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.838136  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.838164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.838054  149378 retry.go:31] will retry after 321.948113ms: waiting for machine to come up
	I1010 19:23:55.161940  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.162494  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.162526  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.162441  149378 retry.go:31] will retry after 573.848095ms: waiting for machine to come up
	I1010 19:23:55.739080  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.739592  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.739620  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.739494  149378 retry.go:31] will retry after 529.087622ms: waiting for machine to come up
	I1010 19:23:56.270324  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.270899  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.270929  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.270850  149378 retry.go:31] will retry after 629.204989ms: waiting for machine to come up
	I1010 19:23:56.901836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.902283  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.902325  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.902222  149378 retry.go:31] will retry after 804.309499ms: waiting for machine to come up
	I1010 19:23:57.708806  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:57.709175  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:57.709208  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:57.709151  149378 retry.go:31] will retry after 1.204078295s: waiting for machine to come up
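	The "will retry after ..." lines above are a jittered, growing backoff while waiting for the no-preload-320324 VM to obtain an IP address. A hypothetical Go sketch of that retry pattern; the starting interval, growth factor, and overall timeout are assumptions, not minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls check until it succeeds or maxWait elapses, sleeping
// a randomized, slowly growing interval between attempts.
func retryWithBackoff(check func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	backoff := 300 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if err := check(); err == nil {
			return nil
		}
		wait := backoff/2 + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2 // grow roughly geometrically
	}
	return errors.New("timed out waiting for machine to come up")
}

func main() {
	// Stand-in check that never succeeds, just to show the retry cadence.
	_ = retryWithBackoff(func() error { return errors.New("no IP yet") }, 3*time.Second)
}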
	I1010 19:23:54.322534  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:54.325744  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326217  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:54.326257  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326533  148525 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:54.331527  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:54.343881  148525 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:54.344033  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:54.344084  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:54.389066  148525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:54.389149  148525 ssh_runner.go:195] Run: which lz4
	I1010 19:23:54.393550  148525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:54.397787  148525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:54.397833  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:55.897111  148525 crio.go:462] duration metric: took 1.503593301s to copy over tarball
	I1010 19:23:55.897212  148525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:58.060691  148525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16343467s)
	I1010 19:23:58.060731  148525 crio.go:469] duration metric: took 2.163580526s to extract the tarball
	I1010 19:23:58.060741  148525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:58.103877  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:58.162881  148525 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:58.162907  148525 cache_images.go:84] Images are preloaded, skipping loading
	I1010 19:23:58.162915  148525 kubeadm.go:934] updating node { 192.168.50.32 8444 v1.31.1 crio true true} ...
	I1010 19:23:58.163031  148525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-361847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:58.163098  148525 ssh_runner.go:195] Run: crio config
	I1010 19:23:58.219804  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:23:58.219827  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:58.219837  148525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:58.219861  148525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-361847 NodeName:default-k8s-diff-port-361847 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:58.219982  148525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-361847"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:58.220042  148525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:58.231444  148525 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:58.231565  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:58.241835  148525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1010 19:23:58.259408  148525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:58.276571  148525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I1010 19:23:58.294640  148525 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:58.298503  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:58.312286  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:58.449757  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:58.467342  148525 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847 for IP: 192.168.50.32
	I1010 19:23:58.467377  148525 certs.go:194] generating shared ca certs ...
	I1010 19:23:58.467398  148525 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:58.467583  148525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:58.467642  148525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:58.467655  148525 certs.go:256] generating profile certs ...
	I1010 19:23:58.467826  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/client.key
	I1010 19:23:58.467895  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key.ae5e3f04
	I1010 19:23:58.467951  148525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key
	I1010 19:23:58.468089  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:58.468136  148525 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:58.468153  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:58.468194  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:58.468226  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:58.468260  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:58.468317  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:58.468931  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:58.529632  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:58.571900  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:58.612599  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:58.645536  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 19:23:58.675961  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:23:58.700712  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:58.725355  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:58.751138  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:58.775832  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:58.800729  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:58.825558  148525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:58.843331  148525 ssh_runner.go:195] Run: openssl version
	I1010 19:23:58.849271  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:58.861031  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865721  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865797  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.871961  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:58.884520  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:58.896744  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901507  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901571  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.907366  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:58.919784  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:58.931972  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936897  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936981  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.943007  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
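	The openssl x509 -hash / ln -fs pairs above install each CA certificate under its OpenSSL subject-hash name in /etc/ssl/certs (for example b5213941.0 pointing at minikubeCA.pem), which is how OpenSSL locates trusted CAs. A hypothetical Go sketch of that step, shelling out to openssl the same way the logged commands do (not minikube's certs.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir under its OpenSSL
// subject-hash name, i.e. HASH.0.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, as ln -fs would
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("error:", err)
	}
}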
	I1010 19:23:59.052037  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:01.551982  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:59.028519  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:59.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.528623  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.028697  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.528030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.028062  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.527876  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.028745  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.528569  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.914409  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:58.914894  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:58.914927  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:58.914831  149378 retry.go:31] will retry after 1.631827888s: waiting for machine to come up
	I1010 19:24:00.548505  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:00.549135  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:00.549164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:00.549043  149378 retry.go:31] will retry after 2.126895157s: waiting for machine to come up
	I1010 19:24:02.678328  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:02.678907  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:02.678969  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:02.678891  149378 retry.go:31] will retry after 2.754376625s: waiting for machine to come up
	I1010 19:23:58.955104  148525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:58.959833  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:58.966528  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:58.973590  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:58.982390  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:58.990767  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:58.997162  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:23:59.003647  148525 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:59.003786  148525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:59.003865  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.048772  148525 cri.go:89] found id: ""
	I1010 19:23:59.048869  148525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:59.061267  148525 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:59.061288  148525 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:59.061338  148525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:59.072629  148525 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:59.074287  148525 kubeconfig.go:125] found "default-k8s-diff-port-361847" server: "https://192.168.50.32:8444"
	I1010 19:23:59.077880  148525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:59.090738  148525 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I1010 19:23:59.090783  148525 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:59.090799  148525 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:59.090886  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.136762  148525 cri.go:89] found id: ""
	I1010 19:23:59.136888  148525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:59.155937  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:59.166471  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:59.166493  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:59.166549  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:23:59.178247  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:59.178313  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:59.189455  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:23:59.200127  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:59.200204  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:59.210764  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.221048  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:59.221119  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.231762  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:23:59.242152  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:59.242217  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:59.252608  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:59.265219  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:59.391743  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.243288  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.453782  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.532137  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.623598  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:00.623711  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.124678  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.624626  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.667587  148525 api_server.go:72] duration metric: took 1.043987857s to wait for apiserver process to appear ...
	I1010 19:24:01.667621  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:01.667649  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:01.668298  148525 api_server.go:269] stopped: https://192.168.50.32:8444/healthz: Get "https://192.168.50.32:8444/healthz": dial tcp 192.168.50.32:8444: connect: connection refused
	I1010 19:24:02.168273  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.275654  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.275695  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.275713  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.309713  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.309770  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.668325  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.684992  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:05.685031  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.168198  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.176584  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:06.176627  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.668130  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.682049  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:24:06.692780  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:06.692811  148525 api_server.go:131] duration metric: took 5.025182717s to wait for apiserver health ...
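	The api_server.go lines above poll https://192.168.50.32:8444/healthz until it answers 200 "ok", tolerating connection-refused, 403 (anonymous access before the RBAC bootstrap roles exist) and 500 (failing poststarthooks) along the way. A minimal, hypothetical Go sketch of such a polling loop; the client settings and 500ms interval are assumptions, not minikube's implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a self-signed cert here, so this sketch skips
		// verification; production code would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s to become healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.32:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}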
	I1010 19:24:06.692820  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:24:06.692831  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:06.694447  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:03.558797  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:06.054012  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:04.028711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:04.528770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.028716  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.528083  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.028204  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.528430  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.027972  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.027829  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.528676  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.435450  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:05.435940  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:05.435970  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:05.435888  149378 retry.go:31] will retry after 2.981990051s: waiting for machine to come up
	I1010 19:24:08.419385  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:08.419982  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:08.420006  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:08.419905  149378 retry.go:31] will retry after 3.976204267s: waiting for machine to come up
	I1010 19:24:06.695841  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:06.711212  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:06.747753  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:06.768344  148525 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:06.768429  148525 system_pods.go:61] "coredns-7c65d6cfc9-rv8vq" [93b209ea-bb5f-40c5-aea8-8771b785f021] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:06.768446  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [65129999-984d-497c-a6e1-9c53a5374991] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:06.768452  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [5f18ba24-29cf-433e-a70d-23757278c04f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:06.768460  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [c189c785-8ac5-4003-802d-9e7c089d450e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:06.768467  148525 system_pods.go:61] "kube-proxy-v5lm8" [e78eabf9-5c65-4cba-83fd-0837cef05126] Running
	I1010 19:24:06.768476  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [4f84f0f5-e255-4534-9db3-e5cfee0b2447] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:06.768485  148525 system_pods.go:61] "metrics-server-6867b74b74-h5kjm" [a3979b79-bd21-490b-97ac-0a78efd43a99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:06.768493  148525 system_pods.go:61] "storage-provisioner" [ca8606d3-9adb-46da-886a-3081b11b52a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:24:06.768499  148525 system_pods.go:74] duration metric: took 20.716461ms to wait for pod list to return data ...
	I1010 19:24:06.768509  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:06.777935  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:06.777973  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:06.777988  148525 node_conditions.go:105] duration metric: took 9.473726ms to run NodePressure ...
	I1010 19:24:06.778019  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:07.053296  148525 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057585  148525 kubeadm.go:739] kubelet initialised
	I1010 19:24:07.057608  148525 kubeadm.go:740] duration metric: took 4.283027ms waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057618  148525 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:07.064157  148525 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.069962  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.069989  148525 pod_ready.go:82] duration metric: took 5.791958ms for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.069999  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.070022  148525 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.075615  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075644  148525 pod_ready.go:82] duration metric: took 5.608749ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.075654  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075661  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.081717  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081743  148525 pod_ready.go:82] duration metric: took 6.074977ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.081754  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081761  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.152204  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152244  148525 pod_ready.go:82] duration metric: took 70.475599ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.152258  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152266  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551283  148525 pod_ready.go:93] pod "kube-proxy-v5lm8" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:07.551311  148525 pod_ready.go:82] duration metric: took 399.036581ms for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551324  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:08.550896  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:10.551437  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:09.028538  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:09.527883  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.028693  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.528439  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.028528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.528679  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.028002  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.527904  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.028685  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.527833  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.401115  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401808  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has current primary IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401841  147213 main.go:141] libmachine: (no-preload-320324) Found IP for machine: 192.168.72.11
	I1010 19:24:12.401856  147213 main.go:141] libmachine: (no-preload-320324) Reserving static IP address...
	I1010 19:24:12.402368  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.402407  147213 main.go:141] libmachine: (no-preload-320324) DBG | skip adding static IP to network mk-no-preload-320324 - found existing host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"}
	I1010 19:24:12.402426  147213 main.go:141] libmachine: (no-preload-320324) Reserved static IP address: 192.168.72.11
	I1010 19:24:12.402443  147213 main.go:141] libmachine: (no-preload-320324) Waiting for SSH to be available...
	I1010 19:24:12.402458  147213 main.go:141] libmachine: (no-preload-320324) DBG | Getting to WaitForSSH function...
	I1010 19:24:12.404803  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405200  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.405226  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405461  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH client type: external
	I1010 19:24:12.405494  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa (-rw-------)
	I1010 19:24:12.405527  147213 main.go:141] libmachine: (no-preload-320324) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:24:12.405541  147213 main.go:141] libmachine: (no-preload-320324) DBG | About to run SSH command:
	I1010 19:24:12.405554  147213 main.go:141] libmachine: (no-preload-320324) DBG | exit 0
	I1010 19:24:12.529010  147213 main.go:141] libmachine: (no-preload-320324) DBG | SSH cmd err, output: <nil>: 
	I1010 19:24:12.529401  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetConfigRaw
	I1010 19:24:12.530257  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.533285  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533692  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.533727  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533963  147213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/config.json ...
	I1010 19:24:12.534205  147213 machine.go:93] provisionDockerMachine start ...
	I1010 19:24:12.534230  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:12.534450  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.536585  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.536976  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.537003  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.537133  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.537323  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537512  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537689  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.537925  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.538138  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.538151  147213 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:24:12.641679  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:24:12.641706  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.641964  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:24:12.642002  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.642235  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.645149  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645488  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.645521  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.645836  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646001  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646155  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.646352  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.646533  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.646545  147213 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320324 && echo "no-preload-320324" | sudo tee /etc/hostname
	I1010 19:24:12.766449  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320324
	
	I1010 19:24:12.766480  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.769836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770331  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.770356  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770584  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.770810  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.770962  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.771119  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.771252  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.771448  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.771470  147213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320324/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:24:12.882458  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:24:12.882495  147213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:24:12.882537  147213 buildroot.go:174] setting up certificates
	I1010 19:24:12.882547  147213 provision.go:84] configureAuth start
	I1010 19:24:12.882562  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.882865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.885854  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886139  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.886173  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886308  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.888479  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.888819  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888976  147213 provision.go:143] copyHostCerts
	I1010 19:24:12.889037  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:24:12.889049  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:24:12.889102  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:24:12.889235  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:24:12.889246  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:24:12.889278  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:24:12.889370  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:24:12.889381  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:24:12.889406  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:24:12.889493  147213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.no-preload-320324 san=[127.0.0.1 192.168.72.11 localhost minikube no-preload-320324]
	I1010 19:24:12.978176  147213 provision.go:177] copyRemoteCerts
	I1010 19:24:12.978235  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:24:12.978261  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.981662  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982182  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.982218  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.982647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.982829  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.983005  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.067269  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:24:13.092777  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 19:24:13.118530  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:24:13.143401  147213 provision.go:87] duration metric: took 260.833877ms to configureAuth
	I1010 19:24:13.143436  147213 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:24:13.143678  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:13.143776  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.147086  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147507  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.147531  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147787  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.148032  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148222  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.148660  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.149013  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.149041  147213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:24:13.375683  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:24:13.375714  147213 machine.go:96] duration metric: took 841.493636ms to provisionDockerMachine
	I1010 19:24:13.375736  147213 start.go:293] postStartSetup for "no-preload-320324" (driver="kvm2")
	I1010 19:24:13.375754  147213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:24:13.375775  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.376085  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:24:13.376116  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.378855  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379179  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.379224  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379408  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.379608  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.379769  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.379910  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.459580  147213 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:24:13.463644  147213 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:24:13.463674  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:24:13.463751  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:24:13.463845  147213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:24:13.463963  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:24:13.473483  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:13.498773  147213 start.go:296] duration metric: took 123.021762ms for postStartSetup
	I1010 19:24:13.498814  147213 fix.go:56] duration metric: took 20.640532088s for fixHost
	I1010 19:24:13.498834  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.501681  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502243  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.502281  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502476  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.502679  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502835  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502993  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.503177  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.503383  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.503396  147213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:24:13.613929  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588253.586950075
	
	I1010 19:24:13.613954  147213 fix.go:216] guest clock: 1728588253.586950075
	I1010 19:24:13.613963  147213 fix.go:229] Guest: 2024-10-10 19:24:13.586950075 +0000 UTC Remote: 2024-10-10 19:24:13.498818059 +0000 UTC m=+359.788559229 (delta=88.132016ms)
	I1010 19:24:13.613988  147213 fix.go:200] guest clock delta is within tolerance: 88.132016ms
	I1010 19:24:13.614020  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 20.755775587s
	I1010 19:24:13.614063  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.614473  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:13.617327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.617694  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.617721  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.618016  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618670  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618884  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618989  147213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:24:13.619047  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.619142  147213 ssh_runner.go:195] Run: cat /version.json
	I1010 19:24:13.619185  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.621972  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622229  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622322  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622348  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622533  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622666  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622697  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622736  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.622865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622930  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623059  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.623073  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.623225  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623349  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.720999  147213 ssh_runner.go:195] Run: systemctl --version
	I1010 19:24:13.727679  147213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:24:09.562834  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:12.058686  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:13.870558  147213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:24:13.877853  147213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:24:13.877923  147213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:24:13.896295  147213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:24:13.896325  147213 start.go:495] detecting cgroup driver to use...
	I1010 19:24:13.896400  147213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:24:13.913122  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:24:13.929359  147213 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:24:13.929437  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:24:13.944840  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:24:13.960062  147213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:24:14.090774  147213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:24:14.246094  147213 docker.go:233] disabling docker service ...
	I1010 19:24:14.246161  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:24:14.264682  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:24:14.280264  147213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:24:14.437156  147213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:24:14.569220  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:24:14.585723  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:24:14.607349  147213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:24:14.607429  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.619113  147213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:24:14.619198  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.631818  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.643977  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.655753  147213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:24:14.667235  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.679225  147213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.698760  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.710440  147213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:24:14.722565  147213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:24:14.722625  147213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:24:14.740587  147213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:24:14.752630  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:14.887728  147213 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:24:14.989026  147213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:24:14.989109  147213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:24:14.995309  147213 start.go:563] Will wait 60s for crictl version
	I1010 19:24:14.995366  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.999840  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:24:15.043758  147213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:24:15.043856  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.079274  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.116630  147213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:24:13.050633  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:15.552413  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:14.028090  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:14.528072  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.028648  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.528388  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.028060  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.528472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.028098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.528125  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.028563  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.528613  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.118343  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:15.121596  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122101  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:15.122133  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122396  147213 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1010 19:24:15.127140  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:15.141249  147213 kubeadm.go:883] updating cluster {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:24:15.141375  147213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:24:15.141417  147213 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:24:15.183271  147213 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:24:15.183303  147213 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:24:15.183412  147213 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.183444  147213 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.183452  147213 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.183459  147213 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1010 19:24:15.183422  147213 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.183493  147213 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.183512  147213 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.183507  147213 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.185099  147213 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.185098  147213 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.185103  147213 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.185106  147213 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.328484  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.333573  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.340047  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.358922  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1010 19:24:15.359800  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.366668  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.409942  147213 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1010 19:24:15.409995  147213 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.410050  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.416186  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.452343  147213 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1010 19:24:15.452385  147213 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.452426  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.533567  147213 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1010 19:24:15.533620  147213 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.533671  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585611  147213 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1010 19:24:15.585659  147213 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.585685  147213 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1010 19:24:15.585712  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585724  147213 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.585765  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585769  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.585805  147213 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1010 19:24:15.585832  147213 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.585856  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.585872  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585943  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.603131  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.661918  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.683739  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.683760  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.683833  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.683880  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.685385  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.792253  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.818116  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.818183  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.818289  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.818321  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.818402  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.878069  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1010 19:24:15.878202  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.940520  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.953841  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1010 19:24:15.953955  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:15.953990  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.954047  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1010 19:24:15.954115  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1010 19:24:15.954120  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1010 19:24:15.954130  147213 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954144  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:15.954157  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954205  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:16.005975  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1010 19:24:16.006028  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1010 19:24:16.006090  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:16.023905  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1010 19:24:16.023990  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1010 19:24:16.024024  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:16.024023  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1010 19:24:16.033715  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.150881  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.144766677s)
	I1010 19:24:18.150935  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1010 19:24:18.150931  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.196753845s)
	I1010 19:24:18.150944  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.126894115s)
	I1010 19:24:18.150973  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1010 19:24:18.150953  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1010 19:24:18.150982  147213 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.117235962s)
	I1010 19:24:18.151002  147213 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151014  147213 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1010 19:24:18.151053  147213 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.151069  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151097  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.059223  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:14.059252  148525 pod_ready.go:82] duration metric: took 6.507918149s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:14.059266  148525 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:16.066908  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.082398  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.051799  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:20.552644  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:19.028306  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:19.527854  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.028765  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.528245  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.027936  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.528172  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.028527  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.528446  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.028709  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.528711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.952099  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.801005716s)
	I1010 19:24:21.952134  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1010 19:24:21.952163  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952165  147213 ssh_runner.go:235] Completed: which crictl: (3.801048272s)
	I1010 19:24:21.952212  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952225  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:21.993627  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:20.566055  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:22.567145  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:23.053514  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:25.554151  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:24.028402  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:24.528516  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.028207  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.527928  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.028032  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.527826  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.028609  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.528448  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.028018  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.528597  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.929370  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.977128659s)
	I1010 19:24:23.929418  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1010 19:24:23.929450  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929498  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.935844384s)
	I1010 19:24:23.929532  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929551  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:26.009485  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079908324s)
	I1010 19:24:26.009567  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 19:24:26.009484  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079925224s)
	I1010 19:24:26.009641  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1010 19:24:26.009671  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:26.009684  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:26.009720  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:27.968483  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.958772952s)
	I1010 19:24:27.968534  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1010 19:24:27.968559  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.958813643s)
	I1010 19:24:27.968587  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1010 19:24:27.968619  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:27.968686  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:25.069787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:27.567013  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:28.050968  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:30.551528  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:29.027960  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.528054  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.028227  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.527791  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.027790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.528761  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.028343  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.528553  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.028195  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.527895  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.315157  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.346440456s)
	I1010 19:24:29.315211  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1010 19:24:29.315244  147213 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:29.315296  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:30.173931  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 19:24:30.173977  147213 cache_images.go:123] Successfully loaded all cached images
	I1010 19:24:30.173985  147213 cache_images.go:92] duration metric: took 14.990666845s to LoadCachedImages
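	The loading phase above follows a fixed pattern per image: stat the cached tarball on the node, skip the copy when it already exists ("copy: skipping ... (exists)"), then run "sudo podman load -i <tarball>". The following stdlib-only Go sketch mirrors that check-and-load pattern; it is purely illustrative (local execution rather than minikube's SSH runner), and the tarball path is just the naming convention visible in the log.

	// Illustrative only: stat / skip-if-missing / podman-load, as in the log above.
	// This is not minikube's implementation; it assumes passwordless sudo locally.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func loadCachedImage(tarball string) error {
		// The real flow copies the tarball first when it is absent; here we just bail out.
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("cached image %s not present: %w", tarball, err)
		}
		// Equivalent of: sudo podman load -i /var/lib/minikube/images/<image>
		cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		// Hypothetical tarball name, matching the log's /var/lib/minikube/images layout.
		if err := loadCachedImage("/var/lib/minikube/images/coredns_v1.11.3"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}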
	I1010 19:24:30.174001  147213 kubeadm.go:934] updating node { 192.168.72.11 8443 v1.31.1 crio true true} ...
	I1010 19:24:30.174129  147213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:24:30.174221  147213 ssh_runner.go:195] Run: crio config
	I1010 19:24:30.222677  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:30.222702  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:30.222711  147213 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:24:30.222736  147213 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320324 NodeName:no-preload-320324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:24:30.222923  147213 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320324"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
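	The generated configuration above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, separated by "---") that is later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal stdlib-only Go sketch for splitting such a stream and listing each document's kind, purely as an illustration of the layout shown in the log:

	// Illustrative: split a multi-document kubeadm YAML stream on "---" and print
	// the kind of each document. Standard library only; the path is the one from the log.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
					break
				}
			}
		}
	}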
	
	I1010 19:24:30.222998  147213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:24:30.233755  147213 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:24:30.233818  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:24:30.243829  147213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1010 19:24:30.263056  147213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:24:30.282362  147213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I1010 19:24:30.300449  147213 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I1010 19:24:30.304661  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:30.317462  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:30.445515  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:30.462816  147213 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324 for IP: 192.168.72.11
	I1010 19:24:30.462847  147213 certs.go:194] generating shared ca certs ...
	I1010 19:24:30.462871  147213 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:30.463074  147213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:24:30.463132  147213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:24:30.463145  147213 certs.go:256] generating profile certs ...
	I1010 19:24:30.463289  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/client.key
	I1010 19:24:30.463364  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key.a7785fc5
	I1010 19:24:30.463413  147213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key
	I1010 19:24:30.463565  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:24:30.463604  147213 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:24:30.463617  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:24:30.463657  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:24:30.463689  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:24:30.463721  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:24:30.463774  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:30.464502  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:24:30.525320  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:24:30.565229  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:24:30.597731  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:24:30.626174  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 19:24:30.659991  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:24:30.685662  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:24:30.710757  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:24:30.736325  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:24:30.771239  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:24:30.796467  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:24:30.821925  147213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:24:30.840743  147213 ssh_runner.go:195] Run: openssl version
	I1010 19:24:30.846898  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:24:30.858410  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863188  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863260  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.869307  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:24:30.880319  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:24:30.891307  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895771  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895828  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.901510  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:24:30.912627  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:24:30.924330  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929108  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929194  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.935266  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:24:30.946714  147213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:24:30.951692  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:24:30.957910  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:24:30.964296  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:24:30.971001  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:24:30.977427  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:24:30.984201  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
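	The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above verify that each control-plane certificate remains valid for at least the next 24 hours. A small Go equivalent using crypto/x509 is sketched below; the certificate path is one of the paths from the log, and this is only an illustration of the same check, not the tooling the test uses.

	// Illustrative equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// report whether the certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}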
	I1010 19:24:30.990532  147213 kubeadm.go:392] StartCluster: {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:24:30.990622  147213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:24:30.990727  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.033544  147213 cri.go:89] found id: ""
	I1010 19:24:31.033624  147213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:24:31.044956  147213 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:24:31.044975  147213 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:24:31.045025  147213 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:24:31.056563  147213 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:24:31.057705  147213 kubeconfig.go:125] found "no-preload-320324" server: "https://192.168.72.11:8443"
	I1010 19:24:31.059853  147213 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:24:31.071304  147213 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.11
	I1010 19:24:31.071338  147213 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:24:31.071353  147213 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:24:31.071444  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.107345  147213 cri.go:89] found id: ""
	I1010 19:24:31.107429  147213 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:24:31.125556  147213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:24:31.135390  147213 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:24:31.135428  147213 kubeadm.go:157] found existing configuration files:
	
	I1010 19:24:31.135478  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:24:31.144653  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:24:31.144715  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:24:31.154458  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:24:31.163444  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:24:31.163501  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:24:31.172633  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.181939  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:24:31.182001  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.191638  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:24:31.200846  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:24:31.200935  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
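	The cycle above checks each existing /etc/kubernetes/*.conf for the expected control-plane endpoint (via grep) and removes any file that does not reference it, so the subsequent "kubeadm init phase kubeconfig" can regenerate clean kubeconfigs. A stdlib-only Go sketch of that stale-config cleanup follows; it is an illustration of the same idea, not minikube's code, and it runs locally rather than over SSH.

	// Illustrative stale-config cleanup: keep an /etc/kubernetes/*.conf only if it
	// already points at the expected control-plane endpoint, otherwise delete it.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func cleanStaleConfig(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if os.IsNotExist(err) {
			return nil // nothing to clean; kubeadm will create it
		}
		if err != nil {
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil // config already targets the expected endpoint
		}
		fmt.Printf("removing stale %s\n", path)
		return os.Remove(path)
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443" // endpoint from the log
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := cleanStaleConfig(f, endpoint); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}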
	I1010 19:24:31.211048  147213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:24:31.221008  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:31.352733  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.270546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.474510  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.551517  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.707737  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:32.707826  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.208647  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.708539  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.728647  147213 api_server.go:72] duration metric: took 1.020907246s to wait for apiserver process to appear ...
	I1010 19:24:33.728678  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:33.728701  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:30.066635  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.066732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.552277  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:35.051399  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:34.028434  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:34.527949  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.028224  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.528470  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.028540  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.528499  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.028362  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.528483  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.028531  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.527865  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.025756  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.025787  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.025802  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.078247  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.078283  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.229601  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.237166  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.237204  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:37.728824  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.735660  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.735700  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.229746  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.234449  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:38.234491  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.729000  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.737564  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:24:38.751982  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:38.752012  147213 api_server.go:131] duration metric: took 5.023326632s to wait for apiserver health ...
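	The restart loop above keeps probing https://192.168.72.11:8443/healthz until the apiserver answers 200; the 403 ("system:anonymous") and 500 ("poststarthook/rbac/bootstrap-roles failed") responses along the way are typically transient while the post-start hooks finish. Below is a rough stdlib-only Go sketch of such a polling loop, assuming an unauthenticated probe that skips TLS verification against the cluster's self-signed certificate; it is not the probe minikube itself uses.

	// Illustrative healthz poll: retry roughly every 500ms (matching the cadence in
	// the log) until the endpoint returns HTTP 200 or the deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.11:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}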
	I1010 19:24:38.752023  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:38.752030  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:38.753351  147213 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:34.067208  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:36.067413  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.566729  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.754645  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:38.772086  147213 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:38.792017  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:38.800547  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:38.800592  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:38.800602  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:38.800609  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:38.800617  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:38.800624  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:24:38.800629  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:38.800638  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:38.800642  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:24:38.800648  147213 system_pods.go:74] duration metric: took 8.60732ms to wait for pod list to return data ...
	I1010 19:24:38.800654  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:38.804628  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:38.804663  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:38.804680  147213 node_conditions.go:105] duration metric: took 4.021699ms to run NodePressure ...
	I1010 19:24:38.804700  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:39.078452  147213 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087090  147213 kubeadm.go:739] kubelet initialised
	I1010 19:24:39.087116  147213 kubeadm.go:740] duration metric: took 8.636436ms waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087125  147213 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:39.094468  147213 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.108724  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108756  147213 pod_ready.go:82] duration metric: took 14.254631ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.108770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108780  147213 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.119304  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119335  147213 pod_ready.go:82] duration metric: took 10.543376ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.119345  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119352  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.127243  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127268  147213 pod_ready.go:82] duration metric: took 7.907414ms for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.127278  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127285  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.195549  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195578  147213 pod_ready.go:82] duration metric: took 68.282333ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.195588  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195594  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.595842  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595871  147213 pod_ready.go:82] duration metric: took 400.267905ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.595880  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595886  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.995731  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995760  147213 pod_ready.go:82] duration metric: took 399.866947ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.995770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995777  147213 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:40.396420  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396456  147213 pod_ready.go:82] duration metric: took 400.667834ms for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:40.396470  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396482  147213 pod_ready.go:39] duration metric: took 1.309346973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:40.396508  147213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:24:40.409956  147213 ops.go:34] apiserver oom_adj: -16
	I1010 19:24:40.409980  147213 kubeadm.go:597] duration metric: took 9.364998977s to restartPrimaryControlPlane
	I1010 19:24:40.409991  147213 kubeadm.go:394] duration metric: took 9.419470024s to StartCluster
	I1010 19:24:40.410009  147213 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.410085  147213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:24:40.413037  147213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.413448  147213 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:24:40.413783  147213 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:24:40.413979  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:40.413996  147213 addons.go:69] Setting default-storageclass=true in profile "no-preload-320324"
	I1010 19:24:40.414020  147213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320324"
	I1010 19:24:40.413983  147213 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320324"
	I1010 19:24:40.414048  147213 addons.go:234] Setting addon storage-provisioner=true in "no-preload-320324"
	W1010 19:24:40.414057  147213 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:24:40.414091  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414170  147213 addons.go:69] Setting metrics-server=true in profile "no-preload-320324"
	I1010 19:24:40.414230  147213 addons.go:234] Setting addon metrics-server=true in "no-preload-320324"
	W1010 19:24:40.414252  147213 addons.go:243] addon metrics-server should already be in state true
	I1010 19:24:40.414292  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414612  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414640  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414678  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.414712  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.415409  147213 out.go:177] * Verifying Kubernetes components...
	I1010 19:24:40.415412  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.415553  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.416812  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:40.431363  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1010 19:24:40.431474  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I1010 19:24:40.431659  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I1010 19:24:40.431983  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432136  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432156  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432567  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432587  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432710  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432732  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432740  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432749  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.433000  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433079  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433103  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433468  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.433498  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.436984  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.453362  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.453426  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.454884  147213 addons.go:234] Setting addon default-storageclass=true in "no-preload-320324"
	W1010 19:24:40.454913  147213 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:24:40.454947  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.455335  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.455394  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.470642  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1010 19:24:40.471118  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.471701  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.471730  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.472241  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.472523  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.473953  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I1010 19:24:40.474196  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I1010 19:24:40.474332  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474672  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474814  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.474827  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475181  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.475210  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475310  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475702  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475785  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.475825  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.475922  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.476046  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.478147  147213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:40.478395  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.479869  147213 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.479896  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:24:40.479922  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.480549  147213 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:24:37.051611  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:39.551952  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:41.553895  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:40.482101  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:24:40.482119  147213 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:24:40.482144  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.484066  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484560  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.484588  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484833  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.485065  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.485241  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.485272  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485443  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.485788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.485807  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485842  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.486017  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.486202  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.486454  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.492533  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1010 19:24:40.493012  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.493566  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.493595  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.494056  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.494325  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.496053  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.496301  147213 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.496321  147213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:24:40.496344  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.499125  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499667  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.499690  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499843  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.500022  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.500194  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.500357  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.651454  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:40.667056  147213 node_ready.go:35] waiting up to 6m0s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:40.782217  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.803094  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:24:40.803122  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:24:40.812288  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.837679  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:24:40.837723  147213 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:24:40.882090  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:40.882119  147213 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:24:40.940115  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:41.949181  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.136852217s)
	I1010 19:24:41.949258  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949275  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949286  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.167030419s)
	I1010 19:24:41.949327  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949345  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949625  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949652  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949660  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949661  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949668  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949679  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949761  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949804  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949819  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949826  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.950811  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950824  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.950827  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950822  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950845  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950811  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.957797  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.957814  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.958071  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.958077  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.958099  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005530  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065377363s)
	I1010 19:24:42.005590  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.005602  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.005914  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.005937  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005935  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.005972  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.006003  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.006280  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.006313  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.006335  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.006354  147213 addons.go:475] Verifying addon metrics-server=true in "no-preload-320324"
	I1010 19:24:42.008523  147213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1010 19:24:39.028363  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:39.528571  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.027992  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.528552  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.028220  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.527889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:42.028462  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:42.028549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:42.070669  148123 cri.go:89] found id: ""
	I1010 19:24:42.070701  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.070710  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:42.070716  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:42.070775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:42.110693  148123 cri.go:89] found id: ""
	I1010 19:24:42.110731  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.110748  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:42.110756  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:42.110816  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:42.148471  148123 cri.go:89] found id: ""
	I1010 19:24:42.148511  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.148525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:42.148535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:42.148603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:42.184637  148123 cri.go:89] found id: ""
	I1010 19:24:42.184670  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.184683  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:42.184691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:42.184759  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:42.226794  148123 cri.go:89] found id: ""
	I1010 19:24:42.226834  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.226848  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:42.226857  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:42.226942  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:42.262969  148123 cri.go:89] found id: ""
	I1010 19:24:42.263004  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.263017  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:42.263027  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:42.263167  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:42.301063  148123 cri.go:89] found id: ""
	I1010 19:24:42.301088  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.301096  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:42.301102  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:42.301153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:42.338829  148123 cri.go:89] found id: ""
	I1010 19:24:42.338862  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.338873  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:42.338886  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:42.338901  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:42.391426  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:42.391478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:42.405520  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:42.405563  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:42.544245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:42.544275  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:42.544292  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:42.620760  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:42.620804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:42.009965  147213 addons.go:510] duration metric: took 1.596190602s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1010 19:24:42.672792  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:41.066744  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.066850  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.557231  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:46.051820  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:45.164389  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:45.179714  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:45.179776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:45.216271  148123 cri.go:89] found id: ""
	I1010 19:24:45.216308  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.216319  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:45.216327  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:45.216394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:45.255109  148123 cri.go:89] found id: ""
	I1010 19:24:45.255154  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.255167  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:45.255176  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:45.255248  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:45.294338  148123 cri.go:89] found id: ""
	I1010 19:24:45.294369  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.294380  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:45.294388  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:45.294457  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:45.339573  148123 cri.go:89] found id: ""
	I1010 19:24:45.339606  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.339626  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:45.339636  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:45.339703  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:45.400184  148123 cri.go:89] found id: ""
	I1010 19:24:45.400214  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.400225  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:45.400234  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:45.400301  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:45.436152  148123 cri.go:89] found id: ""
	I1010 19:24:45.436183  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.436195  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:45.436203  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:45.436262  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.484321  148123 cri.go:89] found id: ""
	I1010 19:24:45.484347  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.484355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:45.484361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:45.484441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:45.532893  148123 cri.go:89] found id: ""
	I1010 19:24:45.532923  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.532932  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:45.532943  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:45.532958  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:45.585183  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:45.585214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:45.638800  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:45.638847  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:45.653928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:45.653961  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:45.733534  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:45.733564  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:45.733580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.314367  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:48.329687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:48.329754  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:48.370372  148123 cri.go:89] found id: ""
	I1010 19:24:48.370400  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.370409  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:48.370415  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:48.370490  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:48.409362  148123 cri.go:89] found id: ""
	I1010 19:24:48.409396  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.409407  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:48.409415  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:48.409500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:48.449640  148123 cri.go:89] found id: ""
	I1010 19:24:48.449672  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.449681  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:48.449687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:48.449746  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:48.485053  148123 cri.go:89] found id: ""
	I1010 19:24:48.485104  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.485121  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:48.485129  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:48.485183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:48.520083  148123 cri.go:89] found id: ""
	I1010 19:24:48.520113  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.520121  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:48.520127  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:48.520185  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:48.555112  148123 cri.go:89] found id: ""
	I1010 19:24:48.555138  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.555149  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:48.555156  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:48.555241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.171882  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:47.673073  147213 node_ready.go:49] node "no-preload-320324" has status "Ready":"True"
	I1010 19:24:47.673103  147213 node_ready.go:38] duration metric: took 7.00601327s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:47.673117  147213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:47.682195  147213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690079  147213 pod_ready.go:93] pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.690111  147213 pod_ready.go:82] duration metric: took 7.882823ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690126  147213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698009  147213 pod_ready.go:93] pod "etcd-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.698038  147213 pod_ready.go:82] duration metric: took 7.903016ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698052  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:45.066893  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:47.566144  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.551853  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.050365  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.592682  148123 cri.go:89] found id: ""
	I1010 19:24:48.592710  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.592719  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:48.592725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:48.592775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:48.628449  148123 cri.go:89] found id: ""
	I1010 19:24:48.628482  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.628490  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:48.628500  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:48.628513  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.709385  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:48.709428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:48.752542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:48.752677  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:48.812331  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:48.812385  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:48.827057  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:48.827095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:48.908312  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.409122  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:51.423404  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:51.423482  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:51.465742  148123 cri.go:89] found id: ""
	I1010 19:24:51.465781  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.465793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:51.465803  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:51.465875  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:51.501597  148123 cri.go:89] found id: ""
	I1010 19:24:51.501630  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.501641  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:51.501648  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:51.501711  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:51.537500  148123 cri.go:89] found id: ""
	I1010 19:24:51.537539  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.537551  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:51.537559  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:51.537626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:51.579546  148123 cri.go:89] found id: ""
	I1010 19:24:51.579576  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.579587  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:51.579595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:51.579660  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:51.619893  148123 cri.go:89] found id: ""
	I1010 19:24:51.619917  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.619925  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:51.619931  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:51.619999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:51.655787  148123 cri.go:89] found id: ""
	I1010 19:24:51.655824  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.655837  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:51.655846  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:51.655921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:51.699582  148123 cri.go:89] found id: ""
	I1010 19:24:51.699611  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.699619  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:51.699625  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:51.699685  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:51.738658  148123 cri.go:89] found id: ""
	I1010 19:24:51.738689  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.738697  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:51.738707  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:51.738721  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:51.789556  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:51.789587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:51.858919  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:51.858968  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:51.887818  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:51.887854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:51.964408  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.964434  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:51.964449  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:49.705130  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.705847  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.205374  147213 pod_ready.go:93] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.205401  147213 pod_ready.go:82] duration metric: took 5.507341974s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.205413  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210237  147213 pod_ready.go:93] pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.210259  147213 pod_ready.go:82] duration metric: took 4.83925ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210269  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215158  147213 pod_ready.go:93] pod "kube-proxy-vn6sv" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.215186  147213 pod_ready.go:82] duration metric: took 4.909888ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215198  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220077  147213 pod_ready.go:93] pod "kube-scheduler-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.220097  147213 pod_ready.go:82] duration metric: took 4.890652ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220105  147213 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:50.066165  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:52.066343  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.552604  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:56.050748  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.545627  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:54.560748  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:54.560830  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:54.595863  148123 cri.go:89] found id: ""
	I1010 19:24:54.595893  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.595903  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:54.595912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:54.595978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:54.633645  148123 cri.go:89] found id: ""
	I1010 19:24:54.633681  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.633693  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:54.633701  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:54.633761  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:54.668269  148123 cri.go:89] found id: ""
	I1010 19:24:54.668299  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.668311  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:54.668317  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:54.668369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:54.706559  148123 cri.go:89] found id: ""
	I1010 19:24:54.706591  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.706600  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:54.706608  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:54.706673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:54.746248  148123 cri.go:89] found id: ""
	I1010 19:24:54.746283  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.746295  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:54.746303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:54.746383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:54.782982  148123 cri.go:89] found id: ""
	I1010 19:24:54.783017  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.783027  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:54.783033  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:54.783085  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:54.819664  148123 cri.go:89] found id: ""
	I1010 19:24:54.819700  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.819713  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:54.819722  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:54.819797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:54.859603  148123 cri.go:89] found id: ""
	I1010 19:24:54.859632  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.859640  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:54.859650  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:54.859662  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:54.910949  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:54.910987  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:54.925941  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:54.925975  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:55.009626  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:55.009654  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:55.009669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.097196  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:55.097237  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:57.641732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:57.661141  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:57.661222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:57.716054  148123 cri.go:89] found id: ""
	I1010 19:24:57.716086  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.716094  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:57.716100  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:57.716178  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:57.766862  148123 cri.go:89] found id: ""
	I1010 19:24:57.766892  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.766906  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:57.766917  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:57.766989  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:57.814776  148123 cri.go:89] found id: ""
	I1010 19:24:57.814808  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.814821  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:57.814829  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:57.814899  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:57.850458  148123 cri.go:89] found id: ""
	I1010 19:24:57.850495  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.850505  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:57.850516  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:57.850667  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:57.886541  148123 cri.go:89] found id: ""
	I1010 19:24:57.886566  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.886575  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:57.886581  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:57.886645  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:57.920748  148123 cri.go:89] found id: ""
	I1010 19:24:57.920783  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.920795  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:57.920802  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:57.920887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:57.957801  148123 cri.go:89] found id: ""
	I1010 19:24:57.957833  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.957844  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:57.957852  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:57.957919  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:57.995648  148123 cri.go:89] found id: ""
	I1010 19:24:57.995683  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.995694  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:57.995706  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:57.995723  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:58.034987  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:58.035030  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:58.089014  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:58.089066  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:58.104179  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:58.104211  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:58.178239  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:58.178271  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:58.178291  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.229459  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.727298  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.566779  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.065902  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:58.051248  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.550512  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.756057  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:00.770086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:00.770150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:00.811400  148123 cri.go:89] found id: ""
	I1010 19:25:00.811444  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.811456  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:00.811466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:00.811533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:00.848821  148123 cri.go:89] found id: ""
	I1010 19:25:00.848859  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.848870  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:00.848877  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:00.848947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:00.883529  148123 cri.go:89] found id: ""
	I1010 19:25:00.883557  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.883566  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:00.883573  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:00.883631  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:00.922952  148123 cri.go:89] found id: ""
	I1010 19:25:00.922982  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.922992  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:00.922998  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:00.923057  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:00.956116  148123 cri.go:89] found id: ""
	I1010 19:25:00.956147  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.956159  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:00.956168  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:00.956233  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:00.998892  148123 cri.go:89] found id: ""
	I1010 19:25:00.998919  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.998930  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:00.998939  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:00.998996  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:01.037617  148123 cri.go:89] found id: ""
	I1010 19:25:01.037649  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.037657  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:01.037663  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:01.037717  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:01.078003  148123 cri.go:89] found id: ""
	I1010 19:25:01.078034  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.078046  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:01.078055  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:01.078069  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:01.118948  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:01.118977  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:01.171053  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:01.171091  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:01.187027  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:01.187060  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:01.261288  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:01.261315  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:01.261330  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:59.728997  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.227142  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:59.566448  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.066184  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.551951  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:05.050558  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:03.848144  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:03.862527  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:03.862601  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:03.899120  148123 cri.go:89] found id: ""
	I1010 19:25:03.899159  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.899180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:03.899187  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:03.899276  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:03.935461  148123 cri.go:89] found id: ""
	I1010 19:25:03.935492  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.935500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:03.935507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:03.935569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:03.971892  148123 cri.go:89] found id: ""
	I1010 19:25:03.971925  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.971937  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:03.971945  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:03.972019  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:04.008341  148123 cri.go:89] found id: ""
	I1010 19:25:04.008368  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.008377  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:04.008390  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:04.008447  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:04.042007  148123 cri.go:89] found id: ""
	I1010 19:25:04.042036  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.042044  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:04.042051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:04.042101  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:04.078017  148123 cri.go:89] found id: ""
	I1010 19:25:04.078045  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.078053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:04.078059  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:04.078112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:04.116792  148123 cri.go:89] found id: ""
	I1010 19:25:04.116823  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.116832  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:04.116839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:04.116928  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:04.153468  148123 cri.go:89] found id: ""
	I1010 19:25:04.153496  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.153503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:04.153513  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:04.153525  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:04.230646  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:04.230683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:04.270975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:04.271015  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.320845  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:04.320902  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:04.337789  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:04.337828  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:04.416077  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:06.916412  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:06.931309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:06.931376  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:06.969224  148123 cri.go:89] found id: ""
	I1010 19:25:06.969257  148123 logs.go:282] 0 containers: []
	W1010 19:25:06.969269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:06.969277  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:06.969346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:07.006598  148123 cri.go:89] found id: ""
	I1010 19:25:07.006653  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.006667  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:07.006676  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:07.006744  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:07.046392  148123 cri.go:89] found id: ""
	I1010 19:25:07.046418  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.046427  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:07.046433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:07.046491  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:07.089570  148123 cri.go:89] found id: ""
	I1010 19:25:07.089603  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.089615  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:07.089624  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:07.089689  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:07.127152  148123 cri.go:89] found id: ""
	I1010 19:25:07.127185  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.127195  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:07.127204  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:07.127282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:07.160516  148123 cri.go:89] found id: ""
	I1010 19:25:07.160543  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.160554  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:07.160563  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:07.160635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:07.197270  148123 cri.go:89] found id: ""
	I1010 19:25:07.197307  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.197318  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:07.197327  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:07.197395  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:07.232658  148123 cri.go:89] found id: ""
	I1010 19:25:07.232687  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.232696  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:07.232706  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:07.232720  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:07.246579  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:07.246609  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:07.315200  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:07.315230  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:07.315251  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:07.389532  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:07.389580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:07.438808  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:07.438837  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.227537  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.727865  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:04.067121  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.565089  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:08.565565  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:07.051371  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.051420  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.054211  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.990294  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:10.004155  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:10.004270  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:10.039034  148123 cri.go:89] found id: ""
	I1010 19:25:10.039068  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.039078  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:10.039087  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:10.039174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:10.079936  148123 cri.go:89] found id: ""
	I1010 19:25:10.079967  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.079978  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:10.079985  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:10.080068  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:10.116441  148123 cri.go:89] found id: ""
	I1010 19:25:10.116471  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.116483  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:10.116491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:10.116556  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:10.151998  148123 cri.go:89] found id: ""
	I1010 19:25:10.152045  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.152058  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:10.152075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:10.152153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:10.188514  148123 cri.go:89] found id: ""
	I1010 19:25:10.188547  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.188565  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:10.188574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:10.188640  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:10.223648  148123 cri.go:89] found id: ""
	I1010 19:25:10.223682  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.223694  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:10.223702  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:10.223771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:10.264117  148123 cri.go:89] found id: ""
	I1010 19:25:10.264143  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.264151  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:10.264158  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:10.264215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:10.302566  148123 cri.go:89] found id: ""
	I1010 19:25:10.302601  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.302613  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:10.302625  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:10.302649  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:10.316567  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:10.316606  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:10.388642  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:10.388674  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:10.388692  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:10.471035  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:10.471090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:10.519600  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:10.519645  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:13.075428  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:13.090830  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:13.090915  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:13.126302  148123 cri.go:89] found id: ""
	I1010 19:25:13.126336  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.126348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:13.126357  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:13.126429  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:13.163309  148123 cri.go:89] found id: ""
	I1010 19:25:13.163344  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.163357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:13.163365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:13.163428  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:13.197569  148123 cri.go:89] found id: ""
	I1010 19:25:13.197605  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.197615  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:13.197621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:13.197692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:13.236299  148123 cri.go:89] found id: ""
	I1010 19:25:13.236328  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.236336  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:13.236342  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:13.236406  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:13.275667  148123 cri.go:89] found id: ""
	I1010 19:25:13.275696  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.275705  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:13.275711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:13.275776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:13.309728  148123 cri.go:89] found id: ""
	I1010 19:25:13.309763  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.309774  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:13.309786  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:13.309854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:13.347464  148123 cri.go:89] found id: ""
	I1010 19:25:13.347493  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.347504  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:13.347513  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:13.347576  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:13.385086  148123 cri.go:89] found id: ""
	I1010 19:25:13.385119  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.385130  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:13.385142  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:13.385156  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:13.466513  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:13.466553  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:13.508610  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:13.508651  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:09.226850  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.227241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.726879  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:10.565663  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:12.565845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.555465  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:16.051764  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.564897  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:13.564932  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:13.578771  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:13.578803  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:13.651857  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.153066  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:16.167613  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:16.167698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:16.203867  148123 cri.go:89] found id: ""
	I1010 19:25:16.203897  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.203906  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:16.203912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:16.203965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:16.242077  148123 cri.go:89] found id: ""
	I1010 19:25:16.242109  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.242130  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:16.242138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:16.242234  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:16.283792  148123 cri.go:89] found id: ""
	I1010 19:25:16.283825  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.283836  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:16.283844  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:16.283907  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:16.319934  148123 cri.go:89] found id: ""
	I1010 19:25:16.319969  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.319980  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:16.319990  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:16.320063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:16.360439  148123 cri.go:89] found id: ""
	I1010 19:25:16.360482  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.360495  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:16.360504  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:16.360569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:16.395890  148123 cri.go:89] found id: ""
	I1010 19:25:16.395922  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.395931  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:16.395941  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:16.396009  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:16.431396  148123 cri.go:89] found id: ""
	I1010 19:25:16.431442  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.431456  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:16.431463  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:16.431531  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:16.471768  148123 cri.go:89] found id: ""
	I1010 19:25:16.471796  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.471804  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:16.471814  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:16.471830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:16.526112  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:16.526155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:16.541041  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:16.541074  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:16.623245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.623273  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:16.623289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:16.710840  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:16.710884  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:15.727171  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.728705  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:15.067362  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.566242  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:18.551207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:21.050222  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:19.257566  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:19.273982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:19.274063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:19.311360  148123 cri.go:89] found id: ""
	I1010 19:25:19.311392  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.311404  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:19.311411  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:19.311475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:19.347025  148123 cri.go:89] found id: ""
	I1010 19:25:19.347053  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.347062  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:19.347068  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:19.347120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:19.383575  148123 cri.go:89] found id: ""
	I1010 19:25:19.383608  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.383620  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:19.383633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:19.383698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:19.419245  148123 cri.go:89] found id: ""
	I1010 19:25:19.419270  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.419278  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:19.419284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:19.419334  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:19.453563  148123 cri.go:89] found id: ""
	I1010 19:25:19.453591  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.453601  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:19.453658  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:19.453715  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:19.496430  148123 cri.go:89] found id: ""
	I1010 19:25:19.496458  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.496466  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:19.496473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:19.496533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:19.534626  148123 cri.go:89] found id: ""
	I1010 19:25:19.534670  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.534682  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:19.534691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:19.534757  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:19.574186  148123 cri.go:89] found id: ""
	I1010 19:25:19.574236  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.574248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:19.574261  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:19.574283  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:19.587952  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:19.587989  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:19.664607  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:19.664644  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:19.664664  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:19.742185  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:19.742224  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:19.781530  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:19.781562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.335398  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:22.349627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:22.349693  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:22.389075  148123 cri.go:89] found id: ""
	I1010 19:25:22.389102  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.389110  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:22.389117  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:22.389168  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:22.424331  148123 cri.go:89] found id: ""
	I1010 19:25:22.424368  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.424381  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:22.424389  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:22.424460  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:22.464011  148123 cri.go:89] found id: ""
	I1010 19:25:22.464051  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.464062  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:22.464070  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:22.464141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:22.500786  148123 cri.go:89] found id: ""
	I1010 19:25:22.500819  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.500828  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:22.500835  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:22.500914  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:22.538016  148123 cri.go:89] found id: ""
	I1010 19:25:22.538053  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.538061  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:22.538068  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:22.538124  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:22.576600  148123 cri.go:89] found id: ""
	I1010 19:25:22.576629  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.576637  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:22.576643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:22.576702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:22.616020  148123 cri.go:89] found id: ""
	I1010 19:25:22.616048  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.616057  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:22.616064  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:22.616114  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:22.652238  148123 cri.go:89] found id: ""
	I1010 19:25:22.652271  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.652283  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:22.652295  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:22.652311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:22.726182  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:22.726216  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:22.726234  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:22.814040  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:22.814086  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:22.859254  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:22.859289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.916565  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:22.916607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:20.227871  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.732566  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:20.066872  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.566173  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:23.050833  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.551662  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.432636  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:25.446982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:25.447047  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.482440  148123 cri.go:89] found id: ""
	I1010 19:25:25.482472  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.482484  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:25.482492  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:25.482552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:25.517709  148123 cri.go:89] found id: ""
	I1010 19:25:25.517742  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.517756  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:25.517764  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:25.517867  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:25.561504  148123 cri.go:89] found id: ""
	I1010 19:25:25.561532  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.561544  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:25.561552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:25.561616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:25.601704  148123 cri.go:89] found id: ""
	I1010 19:25:25.601741  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.601753  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:25.601762  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:25.601825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:25.638519  148123 cri.go:89] found id: ""
	I1010 19:25:25.638544  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.638552  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:25.638558  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:25.638609  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:25.673390  148123 cri.go:89] found id: ""
	I1010 19:25:25.673425  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.673436  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:25.673447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:25.673525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:25.708251  148123 cri.go:89] found id: ""
	I1010 19:25:25.708278  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.708286  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:25.708293  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:25.708354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:25.749793  148123 cri.go:89] found id: ""
	I1010 19:25:25.749826  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.749837  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:25.749846  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:25.749861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:25.802511  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:25.802554  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:25.817523  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:25.817562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:25.898595  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:25.898619  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:25.898635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:25.978629  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:25.978684  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
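	The harness repeats this probe cycle for the remainder of the log: pgrep for a kube-apiserver process, list CRI containers for each expected control-plane component, and, finding none, fall back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal shell sketch of the same checks, assuming shell access to the node (for example via `minikube ssh`); every command below is taken from the log lines above:

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== ${name} =="
	      sudo crictl ps -a --quiet --name="${name}"   # empty output means no container found
	    done
	    sudo journalctl -u kubelet -n 400              # kubelet logs
	    sudo journalctl -u crio -n 400                 # CRI-O logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig    # refused while the apiserver is down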
	I1010 19:25:28.520165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:28.532725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:28.532793  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.226875  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.729015  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.066298  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.066963  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.551915  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.558497  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
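	The interleaved pod_ready lines belong to the other profiles running in parallel (processes 147213, 148525 and 147758); each is polling its metrics-server pod and logging "Ready":"False" until the pod becomes ready or the test times out. A rough manual equivalent of that check, assuming kubectl access to the profile in question (the <profile> context name is a placeholder):

	    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-kw529 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'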
	I1010 19:25:28.571667  148123 cri.go:89] found id: ""
	I1010 19:25:28.571695  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.571704  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:28.571710  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:28.571770  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:28.608136  148123 cri.go:89] found id: ""
	I1010 19:25:28.608165  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.608174  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:28.608181  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:28.608244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:28.642123  148123 cri.go:89] found id: ""
	I1010 19:25:28.642161  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.642173  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:28.642181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:28.642242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:28.678600  148123 cri.go:89] found id: ""
	I1010 19:25:28.678633  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.678643  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:28.678651  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:28.678702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:28.716627  148123 cri.go:89] found id: ""
	I1010 19:25:28.716660  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.716673  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:28.716681  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:28.716766  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:28.752750  148123 cri.go:89] found id: ""
	I1010 19:25:28.752786  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.752798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:28.752806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:28.752898  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:28.790021  148123 cri.go:89] found id: ""
	I1010 19:25:28.790054  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.790066  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:28.790075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:28.790128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:28.828760  148123 cri.go:89] found id: ""
	I1010 19:25:28.828803  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.828827  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:28.828839  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:28.828877  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:28.879773  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:28.879811  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:28.894311  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:28.894342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:28.968675  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:28.968706  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:28.968729  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:29.045862  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:29.045913  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:31.588772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:31.601453  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:31.601526  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:31.637056  148123 cri.go:89] found id: ""
	I1010 19:25:31.637090  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.637101  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:31.637108  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:31.637183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:31.677825  148123 cri.go:89] found id: ""
	I1010 19:25:31.677854  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.677862  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:31.677867  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:31.677920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:31.715372  148123 cri.go:89] found id: ""
	I1010 19:25:31.715402  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.715411  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:31.715419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:31.715470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:31.753767  148123 cri.go:89] found id: ""
	I1010 19:25:31.753794  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.753806  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:31.753814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:31.753879  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:31.792999  148123 cri.go:89] found id: ""
	I1010 19:25:31.793024  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.793035  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:31.793050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:31.793106  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:31.831571  148123 cri.go:89] found id: ""
	I1010 19:25:31.831598  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.831607  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:31.831614  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:31.831673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:31.880382  148123 cri.go:89] found id: ""
	I1010 19:25:31.880422  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.880436  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:31.880447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:31.880527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:31.919596  148123 cri.go:89] found id: ""
	I1010 19:25:31.919627  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.919637  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:31.919647  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:31.919661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:31.967963  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:31.968001  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:31.983064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:31.983106  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:32.056073  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:32.056105  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:32.056122  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:32.142927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:32.142978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:30.226683  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.227047  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.565699  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:31.566109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.051411  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.052064  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.550062  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.690025  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:34.705326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:34.705413  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:34.740756  148123 cri.go:89] found id: ""
	I1010 19:25:34.740786  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.740793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:34.740799  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:34.740872  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:34.777021  148123 cri.go:89] found id: ""
	I1010 19:25:34.777049  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.777061  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:34.777069  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:34.777123  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:34.826699  148123 cri.go:89] found id: ""
	I1010 19:25:34.826735  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.826747  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:34.826754  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:34.826896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:34.864297  148123 cri.go:89] found id: ""
	I1010 19:25:34.864329  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.864340  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:34.864348  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:34.864415  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:34.899198  148123 cri.go:89] found id: ""
	I1010 19:25:34.899234  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.899247  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:34.899256  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:34.899315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:34.934132  148123 cri.go:89] found id: ""
	I1010 19:25:34.934162  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.934170  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:34.934176  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:34.934238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:34.969858  148123 cri.go:89] found id: ""
	I1010 19:25:34.969891  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.969903  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:34.969911  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:34.969978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:35.006082  148123 cri.go:89] found id: ""
	I1010 19:25:35.006121  148123 logs.go:282] 0 containers: []
	W1010 19:25:35.006132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:35.006145  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:35.006160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:35.060596  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:35.060631  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:35.076246  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:35.076281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:35.151879  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:35.151912  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:35.151931  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:35.231802  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:35.231845  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:37.774550  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:37.787871  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:37.787938  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:37.824273  148123 cri.go:89] found id: ""
	I1010 19:25:37.824306  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.824317  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:37.824325  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:37.824410  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:37.860548  148123 cri.go:89] found id: ""
	I1010 19:25:37.860582  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.860593  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:37.860601  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:37.860677  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:37.894616  148123 cri.go:89] found id: ""
	I1010 19:25:37.894644  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.894654  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:37.894662  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:37.894730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:37.931247  148123 cri.go:89] found id: ""
	I1010 19:25:37.931272  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.931280  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:37.931287  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:37.931337  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:37.968028  148123 cri.go:89] found id: ""
	I1010 19:25:37.968068  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.968079  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:37.968086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:37.968153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:38.004661  148123 cri.go:89] found id: ""
	I1010 19:25:38.004692  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.004700  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:38.004706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:38.004760  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:38.040874  148123 cri.go:89] found id: ""
	I1010 19:25:38.040906  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.040915  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:38.040922  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:38.040990  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:38.083771  148123 cri.go:89] found id: ""
	I1010 19:25:38.083802  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.083811  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:38.083821  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:38.083836  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:38.099645  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:38.099683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:38.172168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:38.172207  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:38.172223  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:38.248523  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:38.248561  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:38.306594  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:38.306630  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:34.728106  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:37.226285  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.065919  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.066751  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.067361  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.550359  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.551190  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.873868  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:40.889561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:40.889668  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:40.926769  148123 cri.go:89] found id: ""
	I1010 19:25:40.926800  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.926808  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:40.926816  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:40.926869  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:40.962564  148123 cri.go:89] found id: ""
	I1010 19:25:40.962592  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.962600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:40.962606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:40.962659  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:40.997856  148123 cri.go:89] found id: ""
	I1010 19:25:40.997885  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.997894  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:40.997900  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:40.997951  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:41.035479  148123 cri.go:89] found id: ""
	I1010 19:25:41.035506  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.035514  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:41.035520  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:41.035575  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:41.077733  148123 cri.go:89] found id: ""
	I1010 19:25:41.077817  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.077830  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:41.077840  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:41.077916  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:41.115439  148123 cri.go:89] found id: ""
	I1010 19:25:41.115476  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.115489  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:41.115498  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:41.115552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:41.152586  148123 cri.go:89] found id: ""
	I1010 19:25:41.152625  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.152636  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:41.152643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:41.152695  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:41.186807  148123 cri.go:89] found id: ""
	I1010 19:25:41.186840  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.186851  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:41.186862  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:41.186880  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:41.200404  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:41.200447  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:41.280717  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:41.280744  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:41.280760  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:41.360561  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:41.360611  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:41.404461  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:41.404497  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:39.226903  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:41.227077  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.727197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.570404  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.066523  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.050813  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.051094  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.955723  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:43.970895  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:43.970964  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:44.012162  148123 cri.go:89] found id: ""
	I1010 19:25:44.012194  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.012203  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:44.012209  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:44.012282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:44.074583  148123 cri.go:89] found id: ""
	I1010 19:25:44.074606  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.074614  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:44.074620  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:44.074675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:44.133562  148123 cri.go:89] found id: ""
	I1010 19:25:44.133588  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.133596  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:44.133602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:44.133658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:44.168832  148123 cri.go:89] found id: ""
	I1010 19:25:44.168883  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.168896  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:44.168908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:44.168967  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:44.204896  148123 cri.go:89] found id: ""
	I1010 19:25:44.204923  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.204934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:44.204943  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:44.205002  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:44.242410  148123 cri.go:89] found id: ""
	I1010 19:25:44.242437  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.242448  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:44.242456  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:44.242524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:44.278553  148123 cri.go:89] found id: ""
	I1010 19:25:44.278581  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.278589  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:44.278595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:44.278658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:44.316099  148123 cri.go:89] found id: ""
	I1010 19:25:44.316125  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.316132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:44.316141  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:44.316155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:44.365809  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:44.365860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:44.380129  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:44.380162  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:44.454241  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:44.454267  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:44.454281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:44.530928  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:44.530974  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.078279  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:47.092233  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:47.092310  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:47.129517  148123 cri.go:89] found id: ""
	I1010 19:25:47.129557  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.129573  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:47.129582  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:47.129654  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:47.170309  148123 cri.go:89] found id: ""
	I1010 19:25:47.170345  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.170357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:47.170365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:47.170432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:47.205110  148123 cri.go:89] found id: ""
	I1010 19:25:47.205145  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.205156  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:47.205162  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:47.205228  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:47.245668  148123 cri.go:89] found id: ""
	I1010 19:25:47.245695  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.245704  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:47.245711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:47.245768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:47.284549  148123 cri.go:89] found id: ""
	I1010 19:25:47.284576  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.284584  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:47.284590  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:47.284643  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:47.324676  148123 cri.go:89] found id: ""
	I1010 19:25:47.324708  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.324720  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:47.324729  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:47.324782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:47.364315  148123 cri.go:89] found id: ""
	I1010 19:25:47.364347  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.364356  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:47.364362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:47.364433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:47.400611  148123 cri.go:89] found id: ""
	I1010 19:25:47.400641  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.400652  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:47.400664  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:47.400680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:47.415185  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:47.415227  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:47.481638  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:47.481665  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:47.481681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:47.560135  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:47.560171  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.602144  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:47.602174  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:46.227386  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:48.227699  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.066887  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.565340  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.051459  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:49.550170  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:51.554542  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.151770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:50.165336  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:50.165403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:50.201045  148123 cri.go:89] found id: ""
	I1010 19:25:50.201072  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.201082  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:50.201089  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:50.201154  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:50.237047  148123 cri.go:89] found id: ""
	I1010 19:25:50.237082  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.237094  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:50.237102  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:50.237174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:50.273727  148123 cri.go:89] found id: ""
	I1010 19:25:50.273756  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.273767  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:50.273780  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:50.273843  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:50.311397  148123 cri.go:89] found id: ""
	I1010 19:25:50.311424  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.311433  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:50.311450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:50.311513  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:50.347593  148123 cri.go:89] found id: ""
	I1010 19:25:50.347625  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.347637  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:50.347705  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:50.347782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:50.382195  148123 cri.go:89] found id: ""
	I1010 19:25:50.382228  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.382240  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:50.382247  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:50.382313  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:50.419189  148123 cri.go:89] found id: ""
	I1010 19:25:50.419221  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.419229  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:50.419236  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:50.419297  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:50.454368  148123 cri.go:89] found id: ""
	I1010 19:25:50.454399  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.454410  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:50.454421  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:50.454440  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:50.495575  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:50.495623  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.548691  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:50.548737  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:50.564585  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:50.564621  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:50.637152  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:50.637174  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:50.637190  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.221939  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:53.235889  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:53.235968  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:53.278589  148123 cri.go:89] found id: ""
	I1010 19:25:53.278620  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.278632  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:53.278640  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:53.278705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:53.315062  148123 cri.go:89] found id: ""
	I1010 19:25:53.315090  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.315101  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:53.315109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:53.315182  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:53.348994  148123 cri.go:89] found id: ""
	I1010 19:25:53.349031  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.349040  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:53.349046  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:53.349099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:53.383334  148123 cri.go:89] found id: ""
	I1010 19:25:53.383363  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.383373  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:53.383380  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:53.383450  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:53.417888  148123 cri.go:89] found id: ""
	I1010 19:25:53.417920  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.417931  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:53.417938  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:53.418013  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:53.454757  148123 cri.go:89] found id: ""
	I1010 19:25:53.454787  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.454798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:53.454806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:53.454871  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:53.491422  148123 cri.go:89] found id: ""
	I1010 19:25:53.491452  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.491464  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:53.491472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:53.491541  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:53.531240  148123 cri.go:89] found id: ""
	I1010 19:25:53.531271  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.531282  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:53.531294  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:53.531310  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.727196  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.226957  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.065907  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:52.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:54.051112  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:56.554137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.582576  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:53.582608  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:53.596481  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:53.596511  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:53.666134  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:53.666162  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:53.666178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.745621  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:53.745659  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.290744  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:56.307536  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:56.307610  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:56.353456  148123 cri.go:89] found id: ""
	I1010 19:25:56.353484  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.353494  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:56.353501  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:56.353553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:56.394093  148123 cri.go:89] found id: ""
	I1010 19:25:56.394120  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.394131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:56.394138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:56.394203  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:56.428388  148123 cri.go:89] found id: ""
	I1010 19:25:56.428428  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.428441  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:56.428450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:56.428524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:56.464414  148123 cri.go:89] found id: ""
	I1010 19:25:56.464459  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.464472  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:56.464480  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:56.464563  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:56.505271  148123 cri.go:89] found id: ""
	I1010 19:25:56.505307  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.505316  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:56.505322  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:56.505374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:56.546424  148123 cri.go:89] found id: ""
	I1010 19:25:56.546456  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.546467  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:56.546475  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:56.546545  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:56.587458  148123 cri.go:89] found id: ""
	I1010 19:25:56.587489  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.587501  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:56.587509  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:56.587588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:56.628248  148123 cri.go:89] found id: ""
	I1010 19:25:56.628279  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.628291  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:56.628304  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:56.628320  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:56.642109  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:56.642141  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:56.723646  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:56.723678  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:56.723696  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:56.805849  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:56.805899  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.849863  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:56.849891  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:55.230447  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.726896  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:55.066248  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.565240  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.051145  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:01.554276  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.407093  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:59.422915  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:59.423005  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:59.462867  148123 cri.go:89] found id: ""
	I1010 19:25:59.462898  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.462909  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:59.462917  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:59.462980  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:59.496931  148123 cri.go:89] found id: ""
	I1010 19:25:59.496958  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.496968  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:59.496983  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:59.497049  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:59.532241  148123 cri.go:89] found id: ""
	I1010 19:25:59.532271  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.532283  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:59.532291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:59.532352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:59.571285  148123 cri.go:89] found id: ""
	I1010 19:25:59.571313  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.571324  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:59.571331  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:59.571391  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:59.606687  148123 cri.go:89] found id: ""
	I1010 19:25:59.606721  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.606734  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:59.606741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:59.606800  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:59.643247  148123 cri.go:89] found id: ""
	I1010 19:25:59.643276  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.643286  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:59.643294  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:59.643369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:59.679305  148123 cri.go:89] found id: ""
	I1010 19:25:59.679335  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.679344  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:59.679350  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:59.679407  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:59.716149  148123 cri.go:89] found id: ""
	I1010 19:25:59.716198  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.716210  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:59.716222  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:59.716239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:59.730334  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:59.730364  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:59.799759  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.799786  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:59.799804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:59.881883  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:59.881925  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:59.923755  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:59.923784  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.478043  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:02.492627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:02.492705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:02.532133  148123 cri.go:89] found id: ""
	I1010 19:26:02.532160  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.532172  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:02.532179  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:02.532244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:02.571915  148123 cri.go:89] found id: ""
	I1010 19:26:02.571943  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.571951  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:02.571957  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:02.572014  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:02.616330  148123 cri.go:89] found id: ""
	I1010 19:26:02.616365  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.616376  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:02.616385  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:02.616455  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:02.662245  148123 cri.go:89] found id: ""
	I1010 19:26:02.662275  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.662284  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:02.662291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:02.662354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:02.698692  148123 cri.go:89] found id: ""
	I1010 19:26:02.698720  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.698731  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:02.698739  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:02.698805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:02.736643  148123 cri.go:89] found id: ""
	I1010 19:26:02.736667  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.736677  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:02.736685  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:02.736742  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:02.780356  148123 cri.go:89] found id: ""
	I1010 19:26:02.780388  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.780399  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:02.780407  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:02.780475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:02.816716  148123 cri.go:89] found id: ""
	I1010 19:26:02.816744  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.816752  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:02.816761  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:02.816775  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:02.899002  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:02.899046  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:02.942060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:02.942093  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.997605  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:02.997661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:03.011768  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:03.011804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:03.096404  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.727075  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.227526  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.565903  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.066179  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.049656  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.050425  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:05.596971  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:05.612524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:05.612598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:05.649999  148123 cri.go:89] found id: ""
	I1010 19:26:05.650032  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.650044  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:05.650052  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:05.650119  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:05.684365  148123 cri.go:89] found id: ""
	I1010 19:26:05.684398  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.684419  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:05.684427  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:05.684494  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:05.721801  148123 cri.go:89] found id: ""
	I1010 19:26:05.721832  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.721840  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:05.721847  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:05.721908  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:05.762010  148123 cri.go:89] found id: ""
	I1010 19:26:05.762037  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.762046  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:05.762056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:05.762117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:05.803425  148123 cri.go:89] found id: ""
	I1010 19:26:05.803457  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.803468  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:05.803476  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:05.803551  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:05.838800  148123 cri.go:89] found id: ""
	I1010 19:26:05.838833  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.838845  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:05.838853  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:05.838921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:05.875110  148123 cri.go:89] found id: ""
	I1010 19:26:05.875147  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.875158  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:05.875167  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:05.875245  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:05.908951  148123 cri.go:89] found id: ""
	I1010 19:26:05.908988  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.909000  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:05.909012  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:05.909027  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:05.922263  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:05.922293  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:05.996441  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:05.996465  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:05.996484  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:06.078113  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:06.078160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:06.122919  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:06.122956  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:04.726335  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.728178  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.066573  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.564991  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.566655  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.050522  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:10.550288  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.674464  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:08.688452  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:08.688570  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:08.726240  148123 cri.go:89] found id: ""
	I1010 19:26:08.726269  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.726279  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:08.726287  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:08.726370  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:08.764762  148123 cri.go:89] found id: ""
	I1010 19:26:08.764790  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.764799  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:08.764806  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:08.764887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:08.805522  148123 cri.go:89] found id: ""
	I1010 19:26:08.805552  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.805563  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:08.805572  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:08.805642  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:08.845694  148123 cri.go:89] found id: ""
	I1010 19:26:08.845729  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.845740  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:08.845747  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:08.845817  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:08.885024  148123 cri.go:89] found id: ""
	I1010 19:26:08.885054  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.885066  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:08.885074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:08.885138  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:08.927500  148123 cri.go:89] found id: ""
	I1010 19:26:08.927531  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.927540  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:08.927547  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:08.927616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:08.963870  148123 cri.go:89] found id: ""
	I1010 19:26:08.963904  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.963916  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:08.963924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:08.963988  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:08.997984  148123 cri.go:89] found id: ""
	I1010 19:26:08.998017  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.998027  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:08.998039  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:08.998056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:09.049307  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:09.049341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:09.063341  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:09.063432  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:09.145190  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:09.145214  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:09.145226  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:09.231409  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:09.231445  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:11.773475  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:11.788981  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:11.789055  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:11.830289  148123 cri.go:89] found id: ""
	I1010 19:26:11.830319  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.830327  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:11.830333  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:11.830399  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:11.867093  148123 cri.go:89] found id: ""
	I1010 19:26:11.867120  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.867131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:11.867138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:11.867212  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:11.902146  148123 cri.go:89] found id: ""
	I1010 19:26:11.902181  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.902192  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:11.902201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:11.902271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:11.938652  148123 cri.go:89] found id: ""
	I1010 19:26:11.938681  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.938691  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:11.938703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:11.938771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:11.975903  148123 cri.go:89] found id: ""
	I1010 19:26:11.975937  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.975947  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:11.975955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:11.976020  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:12.014083  148123 cri.go:89] found id: ""
	I1010 19:26:12.014111  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.014119  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:12.014126  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:12.014190  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:12.057832  148123 cri.go:89] found id: ""
	I1010 19:26:12.057858  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.057867  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:12.057874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:12.057925  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:12.103253  148123 cri.go:89] found id: ""
	I1010 19:26:12.103286  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.103298  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:12.103311  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:12.103326  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:12.171062  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:12.171104  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:12.185463  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:12.185496  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:12.261568  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:12.261592  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:12.261607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:12.345819  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:12.345860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:09.226954  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.227205  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.227457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.066777  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.565854  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:12.551323  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:15.051745  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:14.886283  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:14.901117  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:14.901213  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:14.935235  148123 cri.go:89] found id: ""
	I1010 19:26:14.935264  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.935273  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:14.935280  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:14.935336  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:14.970617  148123 cri.go:89] found id: ""
	I1010 19:26:14.970642  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.970652  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:14.970658  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:14.970722  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:15.011086  148123 cri.go:89] found id: ""
	I1010 19:26:15.011113  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.011122  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:15.011128  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:15.011188  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:15.053004  148123 cri.go:89] found id: ""
	I1010 19:26:15.053026  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.053033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:15.053039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:15.053099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:15.089275  148123 cri.go:89] found id: ""
	I1010 19:26:15.089304  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.089312  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:15.089319  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:15.089383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:15.126620  148123 cri.go:89] found id: ""
	I1010 19:26:15.126645  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.126654  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:15.126660  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:15.126723  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:15.160920  148123 cri.go:89] found id: ""
	I1010 19:26:15.160956  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.160969  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:15.160979  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:15.161037  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:15.197430  148123 cri.go:89] found id: ""
	I1010 19:26:15.197462  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.197474  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:15.197486  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:15.197507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:15.240905  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:15.240953  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:15.297663  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:15.297719  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:15.312279  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:15.312313  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:15.395296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:15.395320  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:15.395336  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:17.978030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:17.992574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:17.992651  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:18.028846  148123 cri.go:89] found id: ""
	I1010 19:26:18.028890  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.028901  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:18.028908  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:18.028965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:18.073004  148123 cri.go:89] found id: ""
	I1010 19:26:18.073033  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.073041  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:18.073047  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:18.073118  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:18.112062  148123 cri.go:89] found id: ""
	I1010 19:26:18.112098  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.112111  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:18.112121  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:18.112209  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:18.151565  148123 cri.go:89] found id: ""
	I1010 19:26:18.151597  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.151608  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:18.151616  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:18.151675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:18.197483  148123 cri.go:89] found id: ""
	I1010 19:26:18.197509  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.197518  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:18.197524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:18.197591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:18.235151  148123 cri.go:89] found id: ""
	I1010 19:26:18.235181  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.235193  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:18.235201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:18.235279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:18.273807  148123 cri.go:89] found id: ""
	I1010 19:26:18.273841  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.273852  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:18.273858  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:18.273920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:18.311644  148123 cri.go:89] found id: ""
	I1010 19:26:18.311677  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.311688  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:18.311700  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:18.311716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:18.355599  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:18.355635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:18.407786  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:18.407830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:18.422944  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:18.422976  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:18.496830  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:18.496873  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:18.496887  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:15.227600  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.726712  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:16.065701  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:18.066861  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.558257  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.050914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:21.108300  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:21.123177  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:21.123243  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:21.162544  148123 cri.go:89] found id: ""
	I1010 19:26:21.162575  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.162586  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:21.162595  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:21.162662  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:21.199799  148123 cri.go:89] found id: ""
	I1010 19:26:21.199828  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.199839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:21.199845  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:21.199901  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:21.246333  148123 cri.go:89] found id: ""
	I1010 19:26:21.246357  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.246366  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:21.246372  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:21.246433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:21.282681  148123 cri.go:89] found id: ""
	I1010 19:26:21.282719  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.282732  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:21.282740  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:21.282810  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:21.319500  148123 cri.go:89] found id: ""
	I1010 19:26:21.319535  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.319546  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:21.319554  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:21.319623  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:21.355779  148123 cri.go:89] found id: ""
	I1010 19:26:21.355809  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.355820  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:21.355828  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:21.355902  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:21.394567  148123 cri.go:89] found id: ""
	I1010 19:26:21.394608  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.394617  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:21.394623  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:21.394684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:21.430009  148123 cri.go:89] found id: ""
	I1010 19:26:21.430046  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.430058  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:21.430069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:21.430089  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:21.443267  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:21.443301  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:21.517046  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:21.517068  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:21.517083  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:21.594927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:21.594978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:21.634274  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:21.634311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:20.227157  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.727736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.566652  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:23.066459  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.550526  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.050647  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:24.189760  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:24.204355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:24.204420  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:24.248130  148123 cri.go:89] found id: ""
	I1010 19:26:24.248160  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.248171  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:24.248178  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:24.248244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:24.293281  148123 cri.go:89] found id: ""
	I1010 19:26:24.293312  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.293324  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:24.293331  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:24.293400  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:24.350709  148123 cri.go:89] found id: ""
	I1010 19:26:24.350743  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.350755  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:24.350765  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:24.350838  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:24.394113  148123 cri.go:89] found id: ""
	I1010 19:26:24.394152  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.394170  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:24.394181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:24.394256  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:24.445999  148123 cri.go:89] found id: ""
	I1010 19:26:24.446030  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.446042  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:24.446051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:24.446120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:24.483566  148123 cri.go:89] found id: ""
	I1010 19:26:24.483596  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.483605  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:24.483612  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:24.483665  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:24.520705  148123 cri.go:89] found id: ""
	I1010 19:26:24.520736  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.520748  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:24.520757  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:24.520825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:24.557316  148123 cri.go:89] found id: ""
	I1010 19:26:24.557346  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.557355  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:24.557364  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:24.557376  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:24.608065  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:24.608109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:24.623202  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:24.623250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:24.694665  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:24.694697  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:24.694716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.783369  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:24.783414  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.327524  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:27.341132  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:27.341214  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:27.376589  148123 cri.go:89] found id: ""
	I1010 19:26:27.376618  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.376627  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:27.376633  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:27.376684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:27.414452  148123 cri.go:89] found id: ""
	I1010 19:26:27.414491  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.414502  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:27.414510  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:27.414572  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:27.449839  148123 cri.go:89] found id: ""
	I1010 19:26:27.449867  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.449875  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:27.449882  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:27.449932  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:27.492445  148123 cri.go:89] found id: ""
	I1010 19:26:27.492472  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.492480  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:27.492487  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:27.492549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:27.533032  148123 cri.go:89] found id: ""
	I1010 19:26:27.533085  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.533095  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:27.533122  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:27.533199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:27.571096  148123 cri.go:89] found id: ""
	I1010 19:26:27.571122  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.571130  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:27.571135  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:27.571204  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:27.608762  148123 cri.go:89] found id: ""
	I1010 19:26:27.608798  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.608809  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:27.608818  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:27.608896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:27.648200  148123 cri.go:89] found id: ""
	I1010 19:26:27.648236  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.648248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:27.648260  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:27.648275  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.698124  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:27.698154  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:27.755009  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:27.755052  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:27.770048  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:27.770084  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:27.845475  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:27.845505  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:27.845519  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.729352  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:26.731831  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.566028  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.567052  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.555698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.049914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.423117  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:30.436706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:30.436768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:30.473367  148123 cri.go:89] found id: ""
	I1010 19:26:30.473396  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.473408  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:30.473417  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:30.473470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:30.510560  148123 cri.go:89] found id: ""
	I1010 19:26:30.510599  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.510610  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:30.510619  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:30.510687  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:30.549682  148123 cri.go:89] found id: ""
	I1010 19:26:30.549715  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.549726  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:30.549734  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:30.549798  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:30.586401  148123 cri.go:89] found id: ""
	I1010 19:26:30.586425  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.586434  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:30.586441  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:30.586492  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:30.622012  148123 cri.go:89] found id: ""
	I1010 19:26:30.622037  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.622045  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:30.622052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:30.622107  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:30.658336  148123 cri.go:89] found id: ""
	I1010 19:26:30.658364  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.658372  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:30.658378  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:30.658442  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:30.695495  148123 cri.go:89] found id: ""
	I1010 19:26:30.695523  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.695532  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:30.695538  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:30.695591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:30.735093  148123 cri.go:89] found id: ""
	I1010 19:26:30.735125  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.735136  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:30.735148  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:30.735163  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:30.816097  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:30.816140  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:30.853878  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:30.853915  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:30.906289  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:30.906341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:30.923742  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:30.923769  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:30.993226  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.493799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:33.508083  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:33.508166  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:33.545019  148123 cri.go:89] found id: ""
	I1010 19:26:33.545055  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.545072  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:33.545081  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:33.545150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:29.226673  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:31.227117  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.727777  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.068231  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.566025  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.050118  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:34.051720  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:36.550138  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.584215  148123 cri.go:89] found id: ""
	I1010 19:26:33.584242  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.584252  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:33.584259  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:33.584315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:33.620598  148123 cri.go:89] found id: ""
	I1010 19:26:33.620627  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.620636  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:33.620643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:33.620691  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:33.655225  148123 cri.go:89] found id: ""
	I1010 19:26:33.655252  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.655260  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:33.655267  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:33.655322  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:33.693464  148123 cri.go:89] found id: ""
	I1010 19:26:33.693490  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.693498  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:33.693505  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:33.693554  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:33.734024  148123 cri.go:89] found id: ""
	I1010 19:26:33.734059  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.734071  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:33.734079  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:33.734141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:33.769434  148123 cri.go:89] found id: ""
	I1010 19:26:33.769467  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.769476  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:33.769483  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:33.769533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:33.809011  148123 cri.go:89] found id: ""
	I1010 19:26:33.809040  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.809049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:33.809059  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:33.809071  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:33.864186  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:33.864230  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:33.878681  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:33.878711  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:33.958225  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.958250  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:33.958265  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:34.034730  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:34.034774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.577123  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:36.591494  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:36.591560  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:36.625170  148123 cri.go:89] found id: ""
	I1010 19:26:36.625210  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.625222  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:36.625230  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:36.625295  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:36.660790  148123 cri.go:89] found id: ""
	I1010 19:26:36.660821  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.660831  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:36.660838  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:36.660911  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:36.695742  148123 cri.go:89] found id: ""
	I1010 19:26:36.695774  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.695786  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:36.695793  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:36.695854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:36.729523  148123 cri.go:89] found id: ""
	I1010 19:26:36.729552  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.729561  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:36.729569  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:36.729630  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:36.765515  148123 cri.go:89] found id: ""
	I1010 19:26:36.765549  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.765561  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:36.765571  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:36.765635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:36.799110  148123 cri.go:89] found id: ""
	I1010 19:26:36.799144  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.799155  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:36.799164  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:36.799224  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:36.832806  148123 cri.go:89] found id: ""
	I1010 19:26:36.832834  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.832845  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:36.832865  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:36.832933  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:36.869093  148123 cri.go:89] found id: ""
	I1010 19:26:36.869126  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.869137  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:36.869149  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:36.869165  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:36.922229  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:36.922276  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:36.936973  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:36.937019  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:37.016400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:37.016425  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:37.016448  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:37.106308  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:37.106354  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.227451  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.726229  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:35.067396  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:37.565711  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.550438  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:41.050698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:39.652101  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:39.665546  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:39.665639  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:39.699646  148123 cri.go:89] found id: ""
	I1010 19:26:39.699675  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.699686  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:39.699693  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:39.699745  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:39.735568  148123 cri.go:89] found id: ""
	I1010 19:26:39.735592  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.735600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:39.735606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:39.735656  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:39.770021  148123 cri.go:89] found id: ""
	I1010 19:26:39.770049  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.770060  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:39.770067  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:39.770139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:39.804867  148123 cri.go:89] found id: ""
	I1010 19:26:39.804894  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.804903  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:39.804910  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:39.804972  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:39.847074  148123 cri.go:89] found id: ""
	I1010 19:26:39.847104  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.847115  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:39.847124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:39.847200  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:39.890334  148123 cri.go:89] found id: ""
	I1010 19:26:39.890368  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.890379  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:39.890387  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:39.890456  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:39.926316  148123 cri.go:89] found id: ""
	I1010 19:26:39.926346  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.926355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:39.926362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:39.926416  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:39.966179  148123 cri.go:89] found id: ""
	I1010 19:26:39.966215  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.966227  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:39.966239  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:39.966255  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:40.047457  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:40.047505  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.086611  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:40.086656  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:40.139123  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:40.139167  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:40.152966  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:40.152995  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:40.231009  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:42.731851  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:42.748201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:42.748287  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:42.787567  148123 cri.go:89] found id: ""
	I1010 19:26:42.787599  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.787607  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:42.787614  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:42.787697  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:42.831051  148123 cri.go:89] found id: ""
	I1010 19:26:42.831089  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.831100  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:42.831109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:42.831180  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:42.871352  148123 cri.go:89] found id: ""
	I1010 19:26:42.871390  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.871402  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:42.871410  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:42.871484  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:42.910283  148123 cri.go:89] found id: ""
	I1010 19:26:42.910316  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.910327  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:42.910335  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:42.910403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:42.946971  148123 cri.go:89] found id: ""
	I1010 19:26:42.947003  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.947012  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:42.947019  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:42.947087  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:42.985484  148123 cri.go:89] found id: ""
	I1010 19:26:42.985517  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.985528  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:42.985537  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:42.985603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:43.024500  148123 cri.go:89] found id: ""
	I1010 19:26:43.024535  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.024544  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:43.024552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:43.024608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:43.064008  148123 cri.go:89] found id: ""
	I1010 19:26:43.064039  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.064049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:43.064060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:43.064075  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:43.119367  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:43.119415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:43.133396  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:43.133428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:43.207447  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:43.207470  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:43.207491  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:43.290416  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:43.290456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.727919  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.227782  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:40.066461  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:42.565505  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.051835  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.052308  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.834115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:45.849172  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:45.849238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:45.886020  148123 cri.go:89] found id: ""
	I1010 19:26:45.886049  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.886060  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:45.886067  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:45.886152  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:45.923188  148123 cri.go:89] found id: ""
	I1010 19:26:45.923218  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.923245  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:45.923253  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:45.923316  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:45.960297  148123 cri.go:89] found id: ""
	I1010 19:26:45.960337  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.960351  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:45.960361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:45.960432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:45.995140  148123 cri.go:89] found id: ""
	I1010 19:26:45.995173  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.995184  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:45.995191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:45.995265  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:46.040390  148123 cri.go:89] found id: ""
	I1010 19:26:46.040418  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.040426  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:46.040433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:46.040500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:46.079999  148123 cri.go:89] found id: ""
	I1010 19:26:46.080029  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.080042  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:46.080052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:46.080105  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:46.117331  148123 cri.go:89] found id: ""
	I1010 19:26:46.117357  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.117364  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:46.117370  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:46.117441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:46.152248  148123 cri.go:89] found id: ""
	I1010 19:26:46.152279  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.152290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:46.152303  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:46.152319  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:46.192503  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:46.192533  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:46.246419  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:46.246456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:46.259864  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:46.259895  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:46.338518  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:46.338549  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:46.338567  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:45.726776  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.228318  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:44.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.065636  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.551013  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:50.053824  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.924216  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:48.938741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:48.938805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:48.973310  148123 cri.go:89] found id: ""
	I1010 19:26:48.973339  148123 logs.go:282] 0 containers: []
	W1010 19:26:48.973348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:48.973356  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:48.973411  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:49.010467  148123 cri.go:89] found id: ""
	I1010 19:26:49.010492  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.010500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:49.010506  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:49.010585  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:49.051621  148123 cri.go:89] found id: ""
	I1010 19:26:49.051646  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.051653  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:49.051664  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:49.051727  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:49.091090  148123 cri.go:89] found id: ""
	I1010 19:26:49.091121  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.091132  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:49.091140  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:49.091202  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:49.127669  148123 cri.go:89] found id: ""
	I1010 19:26:49.127712  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.127724  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:49.127732  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:49.127804  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:49.164446  148123 cri.go:89] found id: ""
	I1010 19:26:49.164476  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.164485  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:49.164491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:49.164553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:49.201222  148123 cri.go:89] found id: ""
	I1010 19:26:49.201263  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.201275  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:49.201284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:49.201345  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:49.238258  148123 cri.go:89] found id: ""
	I1010 19:26:49.238283  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.238290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:49.238299  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:49.238311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:49.313576  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:49.313619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:49.351066  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:49.351096  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:49.402772  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:49.402823  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:49.417713  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:49.417752  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:49.488834  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:51.989031  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:52.003056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:52.003140  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:52.039682  148123 cri.go:89] found id: ""
	I1010 19:26:52.039709  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.039718  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:52.039725  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:52.039788  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:52.081402  148123 cri.go:89] found id: ""
	I1010 19:26:52.081433  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.081443  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:52.081449  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:52.081502  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:52.118270  148123 cri.go:89] found id: ""
	I1010 19:26:52.118304  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.118315  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:52.118325  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:52.118392  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:52.154692  148123 cri.go:89] found id: ""
	I1010 19:26:52.154724  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.154735  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:52.154743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:52.154807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:52.190056  148123 cri.go:89] found id: ""
	I1010 19:26:52.190085  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.190094  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:52.190101  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:52.190161  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:52.225468  148123 cri.go:89] found id: ""
	I1010 19:26:52.225501  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.225511  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:52.225521  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:52.225589  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:52.260682  148123 cri.go:89] found id: ""
	I1010 19:26:52.260710  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.260718  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:52.260724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:52.260774  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:52.297613  148123 cri.go:89] found id: ""
	I1010 19:26:52.297642  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.297659  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:52.297672  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:52.297689  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:52.352224  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:52.352267  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:52.367003  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:52.367033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:52.443124  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:52.443157  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:52.443173  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:52.522391  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:52.522433  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:50.726363  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.727069  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:49.069109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:51.566132  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:53.567867  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.554195  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.050995  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.066877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:55.082191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:55.082258  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:55.120729  148123 cri.go:89] found id: ""
	I1010 19:26:55.120763  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.120775  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:55.120784  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:55.120863  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:55.154795  148123 cri.go:89] found id: ""
	I1010 19:26:55.154827  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.154839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:55.154848  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:55.154904  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:55.194846  148123 cri.go:89] found id: ""
	I1010 19:26:55.194876  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.194889  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:55.194897  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:55.194958  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:55.230173  148123 cri.go:89] found id: ""
	I1010 19:26:55.230201  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.230212  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:55.230220  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:55.230290  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:55.264999  148123 cri.go:89] found id: ""
	I1010 19:26:55.265025  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.265032  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:55.265039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:55.265096  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:55.303666  148123 cri.go:89] found id: ""
	I1010 19:26:55.303695  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.303703  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:55.303724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:55.303797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:55.342061  148123 cri.go:89] found id: ""
	I1010 19:26:55.342087  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.342098  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:55.342106  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:55.342171  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:55.378305  148123 cri.go:89] found id: ""
	I1010 19:26:55.378336  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.378345  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:55.378354  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:55.378365  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.416815  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:55.416865  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:55.467371  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:55.467415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:55.481704  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:55.481744  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:55.559219  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:55.559242  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:55.559254  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.141472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:58.154326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:58.154394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:58.193053  148123 cri.go:89] found id: ""
	I1010 19:26:58.193080  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.193088  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:58.193094  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:58.193142  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:58.232514  148123 cri.go:89] found id: ""
	I1010 19:26:58.232538  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.232545  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:58.232551  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:58.232599  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:58.266724  148123 cri.go:89] found id: ""
	I1010 19:26:58.266756  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.266765  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:58.266771  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:58.266824  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:58.301986  148123 cri.go:89] found id: ""
	I1010 19:26:58.302012  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.302019  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:58.302031  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:58.302080  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:58.340394  148123 cri.go:89] found id: ""
	I1010 19:26:58.340431  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.340440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:58.340448  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:58.340500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:58.407035  148123 cri.go:89] found id: ""
	I1010 19:26:58.407069  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.407081  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:58.407088  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:58.407151  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:58.446650  148123 cri.go:89] found id: ""
	I1010 19:26:58.446682  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.446694  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:58.446703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:58.446763  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:58.480719  148123 cri.go:89] found id: ""
	I1010 19:26:58.480750  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.480759  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:58.480768  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:58.480781  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.561568  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:58.561602  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.227199  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.726841  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:56.065787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.566732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.550718  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:59.550793  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.609237  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:58.609264  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:58.659705  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:58.659748  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:58.674646  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:58.674680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:58.750922  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.252098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:01.265420  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:01.265497  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:01.300133  148123 cri.go:89] found id: ""
	I1010 19:27:01.300168  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.300180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:01.300188  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:01.300272  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:01.337451  148123 cri.go:89] found id: ""
	I1010 19:27:01.337477  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.337485  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:01.337492  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:01.337539  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:01.371987  148123 cri.go:89] found id: ""
	I1010 19:27:01.372019  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.372028  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:01.372034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:01.372098  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:01.408996  148123 cri.go:89] found id: ""
	I1010 19:27:01.409022  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.409033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:01.409042  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:01.409109  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:01.443558  148123 cri.go:89] found id: ""
	I1010 19:27:01.443587  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.443595  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:01.443602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:01.443663  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:01.478517  148123 cri.go:89] found id: ""
	I1010 19:27:01.478546  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.478555  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:01.478561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:01.478611  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:01.513528  148123 cri.go:89] found id: ""
	I1010 19:27:01.513556  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.513568  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:01.513576  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:01.513641  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:01.558861  148123 cri.go:89] found id: ""
	I1010 19:27:01.558897  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.558909  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:01.558921  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:01.558937  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:01.599719  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:01.599747  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:01.650736  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:01.650774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:01.664643  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:01.664669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:01.736533  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.736557  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:01.736570  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:00.225540  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.226962  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:00.567193  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:03.066587  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.050439  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.050984  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:06.550977  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.317302  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:04.330450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:04.330519  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:04.368893  148123 cri.go:89] found id: ""
	I1010 19:27:04.368923  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.368932  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:04.368939  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:04.368993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:04.406598  148123 cri.go:89] found id: ""
	I1010 19:27:04.406625  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.406634  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:04.406640  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:04.406692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:04.441485  148123 cri.go:89] found id: ""
	I1010 19:27:04.441515  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.441525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:04.441532  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:04.441581  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:04.476663  148123 cri.go:89] found id: ""
	I1010 19:27:04.476690  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.476698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:04.476704  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:04.476765  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:04.512124  148123 cri.go:89] found id: ""
	I1010 19:27:04.512171  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.512186  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:04.512195  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:04.512260  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:04.547902  148123 cri.go:89] found id: ""
	I1010 19:27:04.547929  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.547940  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:04.547949  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:04.548008  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:04.585257  148123 cri.go:89] found id: ""
	I1010 19:27:04.585287  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.585297  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:04.585303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:04.585352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:04.621019  148123 cri.go:89] found id: ""
	I1010 19:27:04.621048  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.621057  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:04.621068  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:04.621080  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:04.662770  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:04.662801  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:04.714551  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:04.714592  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:04.730922  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:04.730951  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:04.841139  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.841163  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:04.841178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.428528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:07.442423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:07.442498  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:07.476722  148123 cri.go:89] found id: ""
	I1010 19:27:07.476753  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.476764  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:07.476772  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:07.476835  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:07.512211  148123 cri.go:89] found id: ""
	I1010 19:27:07.512245  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.512256  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:07.512263  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:07.512325  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:07.546546  148123 cri.go:89] found id: ""
	I1010 19:27:07.546588  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.546601  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:07.546619  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:07.546688  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:07.585743  148123 cri.go:89] found id: ""
	I1010 19:27:07.585768  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.585777  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:07.585783  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:07.585848  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:07.626827  148123 cri.go:89] found id: ""
	I1010 19:27:07.626855  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.626865  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:07.626874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:07.626960  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:07.661910  148123 cri.go:89] found id: ""
	I1010 19:27:07.661940  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.661948  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:07.661955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:07.662072  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:07.698255  148123 cri.go:89] found id: ""
	I1010 19:27:07.698288  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.698301  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:07.698309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:07.698374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:07.734389  148123 cri.go:89] found id: ""
	I1010 19:27:07.734417  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.734428  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:07.734440  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:07.734454  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.814089  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:07.814130  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:07.854799  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:07.854831  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:07.906595  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:07.906635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:07.921928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:07.921966  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:07.989839  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.727522  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.226694  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:05.565868  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.567139  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:09.050772  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:11.051291  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.490115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:10.504216  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:10.504292  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:10.554789  148123 cri.go:89] found id: ""
	I1010 19:27:10.554815  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.554825  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:10.554831  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:10.554889  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:10.618970  148123 cri.go:89] found id: ""
	I1010 19:27:10.618997  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.619005  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:10.619012  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:10.619061  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:10.666900  148123 cri.go:89] found id: ""
	I1010 19:27:10.666933  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.666946  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:10.666953  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:10.667023  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:10.704364  148123 cri.go:89] found id: ""
	I1010 19:27:10.704405  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.704430  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:10.704450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:10.704525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:10.742341  148123 cri.go:89] found id: ""
	I1010 19:27:10.742380  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.742389  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:10.742396  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:10.742461  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:10.783056  148123 cri.go:89] found id: ""
	I1010 19:27:10.783088  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.783098  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:10.783105  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:10.783174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:10.823198  148123 cri.go:89] found id: ""
	I1010 19:27:10.823224  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.823233  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:10.823243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:10.823302  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:10.858154  148123 cri.go:89] found id: ""
	I1010 19:27:10.858195  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.858204  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:10.858213  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:10.858229  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:10.935491  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:10.935518  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:10.935532  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:11.015653  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:11.015694  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:11.058765  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:11.058793  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:11.116748  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:11.116799  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:09.727270  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.225797  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.065372  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.065695  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.550669  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.051044  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.631579  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:13.644885  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:13.644953  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:13.685242  148123 cri.go:89] found id: ""
	I1010 19:27:13.685273  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.685285  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:13.685294  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:13.685364  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:13.720831  148123 cri.go:89] found id: ""
	I1010 19:27:13.720866  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.720877  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:13.720885  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:13.720947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:13.766374  148123 cri.go:89] found id: ""
	I1010 19:27:13.766404  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.766413  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:13.766419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:13.766468  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:13.804550  148123 cri.go:89] found id: ""
	I1010 19:27:13.804585  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.804597  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:13.804606  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:13.804674  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:13.842406  148123 cri.go:89] found id: ""
	I1010 19:27:13.842432  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.842440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:13.842460  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:13.842678  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:13.882365  148123 cri.go:89] found id: ""
	I1010 19:27:13.882402  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.882415  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:13.882423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:13.882505  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:13.926016  148123 cri.go:89] found id: ""
	I1010 19:27:13.926052  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.926065  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:13.926075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:13.926177  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:13.960485  148123 cri.go:89] found id: ""
	I1010 19:27:13.960523  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.960533  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:13.960542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:13.960558  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:14.013013  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:14.013055  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:14.026906  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:14.026935  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:14.104469  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:14.104494  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:14.104507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:14.185917  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:14.185959  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:16.732088  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:16.746646  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:16.746710  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:16.784715  148123 cri.go:89] found id: ""
	I1010 19:27:16.784739  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.784749  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:16.784755  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:16.784807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:16.824181  148123 cri.go:89] found id: ""
	I1010 19:27:16.824210  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.824220  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:16.824228  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:16.824289  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:16.860901  148123 cri.go:89] found id: ""
	I1010 19:27:16.860932  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.860941  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:16.860947  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:16.860997  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:16.895127  148123 cri.go:89] found id: ""
	I1010 19:27:16.895159  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.895175  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:16.895183  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:16.895266  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:16.931989  148123 cri.go:89] found id: ""
	I1010 19:27:16.932020  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.932028  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:16.932035  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:16.932086  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:16.967254  148123 cri.go:89] found id: ""
	I1010 19:27:16.967283  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.967292  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:16.967299  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:16.967347  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:17.003010  148123 cri.go:89] found id: ""
	I1010 19:27:17.003035  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.003043  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:17.003050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:17.003097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:17.040053  148123 cri.go:89] found id: ""
	I1010 19:27:17.040093  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.040102  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:17.040118  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:17.040131  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:17.093176  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:17.093217  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:17.108086  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:17.108116  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:17.186423  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:17.186452  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:17.186467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:17.271884  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:17.271926  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:14.227197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.739354  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:14.066233  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.565852  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.566337  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.051613  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:20.549888  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:19.817954  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:19.831924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:19.831993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:19.873415  148123 cri.go:89] found id: ""
	I1010 19:27:19.873441  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.873459  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:19.873466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:19.873527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:19.912174  148123 cri.go:89] found id: ""
	I1010 19:27:19.912207  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.912219  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:19.912227  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:19.912298  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:19.948422  148123 cri.go:89] found id: ""
	I1010 19:27:19.948456  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.948466  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:19.948472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:19.948524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:19.983917  148123 cri.go:89] found id: ""
	I1010 19:27:19.983949  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.983962  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:19.983970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:19.984024  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:20.019160  148123 cri.go:89] found id: ""
	I1010 19:27:20.019187  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.019198  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:20.019207  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:20.019271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:20.056073  148123 cri.go:89] found id: ""
	I1010 19:27:20.056104  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.056116  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:20.056124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:20.056187  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:20.095877  148123 cri.go:89] found id: ""
	I1010 19:27:20.095916  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.095928  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:20.095935  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:20.096007  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:20.132360  148123 cri.go:89] found id: ""
	I1010 19:27:20.132385  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.132394  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:20.132402  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:20.132413  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:20.190573  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:20.190619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:20.205785  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:20.205819  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:20.278882  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:20.278911  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:20.278924  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:20.363982  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:20.364024  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:22.906059  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:22.919839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:22.919912  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:22.958203  148123 cri.go:89] found id: ""
	I1010 19:27:22.958242  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.958252  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:22.958258  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:22.958312  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:22.995834  148123 cri.go:89] found id: ""
	I1010 19:27:22.995866  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.995874  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:22.995880  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:22.995945  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:23.035913  148123 cri.go:89] found id: ""
	I1010 19:27:23.035950  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.035962  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:23.035970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:23.036039  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:23.073995  148123 cri.go:89] found id: ""
	I1010 19:27:23.074036  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.074049  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:23.074057  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:23.074117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:23.111190  148123 cri.go:89] found id: ""
	I1010 19:27:23.111222  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.111234  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:23.111243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:23.111305  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:23.147771  148123 cri.go:89] found id: ""
	I1010 19:27:23.147797  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.147806  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:23.147814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:23.147878  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:23.185485  148123 cri.go:89] found id: ""
	I1010 19:27:23.185517  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.185527  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:23.185535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:23.185598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:23.222030  148123 cri.go:89] found id: ""
	I1010 19:27:23.222060  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.222070  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:23.222081  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:23.222097  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:23.301826  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:23.301849  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:23.301861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:23.378688  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:23.378730  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:23.426683  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:23.426717  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:23.478425  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:23.478467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:19.226994  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.727366  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.067094  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:23.567075  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:22.550076  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:24.551681  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:25.993768  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:26.008311  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:26.008383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:26.046315  148123 cri.go:89] found id: ""
	I1010 19:27:26.046343  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.046355  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:26.046364  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:26.046418  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:26.081823  148123 cri.go:89] found id: ""
	I1010 19:27:26.081847  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.081855  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:26.081861  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:26.081913  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:26.117139  148123 cri.go:89] found id: ""
	I1010 19:27:26.117167  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.117183  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:26.117191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:26.117242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:26.154477  148123 cri.go:89] found id: ""
	I1010 19:27:26.154501  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.154510  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:26.154517  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:26.154568  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:26.187497  148123 cri.go:89] found id: ""
	I1010 19:27:26.187527  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.187535  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:26.187541  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:26.187604  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:26.223110  148123 cri.go:89] found id: ""
	I1010 19:27:26.223140  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.223151  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:26.223160  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:26.223226  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:26.264975  148123 cri.go:89] found id: ""
	I1010 19:27:26.265003  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.265011  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:26.265017  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:26.265067  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:26.307467  148123 cri.go:89] found id: ""
	I1010 19:27:26.307494  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.307503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:26.307512  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:26.307524  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:26.346313  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:26.346342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:26.400069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:26.400109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:26.415467  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:26.415500  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:26.486986  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:26.487016  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:26.487033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:24.226736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.228720  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.726470  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.067100  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.565675  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:27.051110  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.051207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.553085  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.064522  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:29.077631  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:29.077696  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:29.116127  148123 cri.go:89] found id: ""
	I1010 19:27:29.116154  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.116164  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:29.116172  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:29.116241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:29.152863  148123 cri.go:89] found id: ""
	I1010 19:27:29.152893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.152901  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:29.152907  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:29.152963  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:29.189951  148123 cri.go:89] found id: ""
	I1010 19:27:29.189983  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.189992  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:29.189999  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:29.190060  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:29.226611  148123 cri.go:89] found id: ""
	I1010 19:27:29.226646  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.226657  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:29.226665  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:29.226729  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:29.271884  148123 cri.go:89] found id: ""
	I1010 19:27:29.271921  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.271934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:29.271944  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:29.272016  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:29.306128  148123 cri.go:89] found id: ""
	I1010 19:27:29.306168  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.306181  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:29.306191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:29.306255  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:29.341869  148123 cri.go:89] found id: ""
	I1010 19:27:29.341893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.341901  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:29.341908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:29.341962  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:29.376213  148123 cri.go:89] found id: ""
	I1010 19:27:29.376240  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.376249  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:29.376259  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:29.376273  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:29.428827  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:29.428878  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:29.443719  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:29.443754  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:29.516166  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:29.516193  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:29.516208  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:29.596643  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:29.596681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:32.135286  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:32.148791  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:32.148893  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:32.185511  148123 cri.go:89] found id: ""
	I1010 19:27:32.185542  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.185553  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:32.185560  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:32.185626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:32.222908  148123 cri.go:89] found id: ""
	I1010 19:27:32.222934  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.222942  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:32.222948  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:32.222999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:32.260998  148123 cri.go:89] found id: ""
	I1010 19:27:32.261033  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.261045  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:32.261063  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:32.261128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:32.298311  148123 cri.go:89] found id: ""
	I1010 19:27:32.298339  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.298348  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:32.298355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:32.298409  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:32.334134  148123 cri.go:89] found id: ""
	I1010 19:27:32.334220  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.334236  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:32.334249  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:32.334319  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:32.369690  148123 cri.go:89] found id: ""
	I1010 19:27:32.369723  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.369735  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:32.369743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:32.369807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:32.407164  148123 cri.go:89] found id: ""
	I1010 19:27:32.407210  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.407218  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:32.407224  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:32.407279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:32.446468  148123 cri.go:89] found id: ""
	I1010 19:27:32.446494  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.446505  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:32.446519  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:32.446535  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:32.497953  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:32.497997  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:32.513590  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:32.513620  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:32.594037  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:32.594072  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:32.594090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:32.673546  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:32.673587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:30.727725  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:32.727813  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.066731  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:33.067815  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:34.050574  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:36.550119  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.226084  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:35.241152  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:35.241222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:35.283208  148123 cri.go:89] found id: ""
	I1010 19:27:35.283245  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.283269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:35.283286  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:35.283346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:35.320406  148123 cri.go:89] found id: ""
	I1010 19:27:35.320444  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.320457  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:35.320466  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:35.320523  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:35.360984  148123 cri.go:89] found id: ""
	I1010 19:27:35.361015  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.361027  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:35.361034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:35.361097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:35.400191  148123 cri.go:89] found id: ""
	I1010 19:27:35.400219  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.400230  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:35.400244  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:35.400314  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:35.438092  148123 cri.go:89] found id: ""
	I1010 19:27:35.438126  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.438139  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:35.438147  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:35.438215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:35.472656  148123 cri.go:89] found id: ""
	I1010 19:27:35.472681  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.472688  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:35.472694  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:35.472743  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.505895  148123 cri.go:89] found id: ""
	I1010 19:27:35.505931  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.505942  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:35.505950  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:35.506015  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:35.540406  148123 cri.go:89] found id: ""
	I1010 19:27:35.540441  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.540451  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:35.540462  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:35.540478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:35.555874  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:35.555903  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:35.628400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:35.628427  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:35.628453  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:35.708262  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:35.708305  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:35.752796  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:35.752824  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.306212  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:38.319473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:38.319543  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:38.357493  148123 cri.go:89] found id: ""
	I1010 19:27:38.357520  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.357529  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:38.357536  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:38.357588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:38.395908  148123 cri.go:89] found id: ""
	I1010 19:27:38.395935  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.395943  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:38.395949  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:38.396010  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:38.434495  148123 cri.go:89] found id: ""
	I1010 19:27:38.434529  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.434539  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:38.434545  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:38.434608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:38.474647  148123 cri.go:89] found id: ""
	I1010 19:27:38.474686  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.474698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:38.474710  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:38.474784  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:38.511105  148123 cri.go:89] found id: ""
	I1010 19:27:38.511133  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.511141  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:38.511148  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:38.511199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:38.551011  148123 cri.go:89] found id: ""
	I1010 19:27:38.551055  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.551066  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:38.551074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:38.551139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.227301  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:37.726528  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.567838  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.066658  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.552499  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.544561  147758 pod_ready.go:82] duration metric: took 4m0.00091784s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	E1010 19:27:40.544600  147758 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:27:40.544623  147758 pod_ready.go:39] duration metric: took 4m15.623470592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:27:40.544664  147758 kubeadm.go:597] duration metric: took 4m22.92080204s to restartPrimaryControlPlane
	W1010 19:27:40.544737  147758 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:40.544829  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:38.590579  148123 cri.go:89] found id: ""
	I1010 19:27:38.590606  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.590615  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:38.590621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:38.590686  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:38.627775  148123 cri.go:89] found id: ""
	I1010 19:27:38.627812  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.627826  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:38.627838  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:38.627854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.683195  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:38.683239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:38.697220  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:38.697250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:38.771619  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:38.771651  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:38.771668  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:38.851324  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:38.851362  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.408165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:41.422958  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:41.423032  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:41.466774  148123 cri.go:89] found id: ""
	I1010 19:27:41.466806  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.466816  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:41.466823  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:41.466874  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:41.506429  148123 cri.go:89] found id: ""
	I1010 19:27:41.506470  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.506482  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:41.506489  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:41.506555  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:41.548929  148123 cri.go:89] found id: ""
	I1010 19:27:41.548965  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.548976  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:41.548983  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:41.549044  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:41.592243  148123 cri.go:89] found id: ""
	I1010 19:27:41.592274  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.592283  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:41.592290  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:41.592342  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:41.628516  148123 cri.go:89] found id: ""
	I1010 19:27:41.628558  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.628570  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:41.628578  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:41.628650  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:41.665011  148123 cri.go:89] found id: ""
	I1010 19:27:41.665041  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.665053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:41.665060  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:41.665112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:41.702650  148123 cri.go:89] found id: ""
	I1010 19:27:41.702681  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.702692  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:41.702700  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:41.702764  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:41.738971  148123 cri.go:89] found id: ""
	I1010 19:27:41.739002  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.739021  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:41.739031  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:41.739062  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:41.813296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:41.813321  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:41.813335  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:41.895138  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:41.895185  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.941975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:41.942012  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:41.994838  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:41.994888  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:39.727140  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:41.728263  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.566241  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:43.065219  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:44.511153  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:44.524871  148123 kubeadm.go:597] duration metric: took 4m4.194337013s to restartPrimaryControlPlane
	W1010 19:27:44.524953  148123 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:44.524988  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:44.994063  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:27:45.010926  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:27:45.021757  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:27:45.032246  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:27:45.032270  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:27:45.032326  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:27:45.042799  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:27:45.042866  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:27:45.053017  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:27:45.063373  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:27:45.063445  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:27:45.074689  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.085187  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:27:45.085241  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.096275  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:27:45.106686  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:27:45.106750  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:27:45.117018  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:27:45.358920  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:27:44.226853  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:46.227586  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:48.727469  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:45.066410  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:47.569864  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:51.230704  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:53.727351  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:50.065845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:52.066267  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:55.727457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:58.226861  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:54.564611  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:56.566702  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:00.728542  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.225779  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:59.065614  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:01.068088  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.566502  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.739904  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.195045639s)
	I1010 19:28:06.739984  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:06.756046  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:06.768580  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:06.780663  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:06.780732  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:06.780807  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:28:06.792092  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:06.792179  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:06.804515  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:28:06.814969  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:06.815040  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:06.826056  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.836050  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:06.836108  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.846125  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:28:06.855505  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:06.855559  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:06.865367  147758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:06.916227  147758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:06.916375  147758 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:07.036539  147758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:07.036652  147758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:07.036762  147758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:07.044897  147758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:07.046978  147758 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:07.047117  147758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:07.047229  147758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:07.047384  147758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:07.047467  147758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:07.047584  147758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:07.047675  147758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:07.047794  147758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:07.047902  147758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:07.048005  147758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:07.048093  147758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:07.048142  147758 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:07.048210  147758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:07.127836  147758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:07.434492  147758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:07.487567  147758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:07.731314  147758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:07.919060  147758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:07.919565  147758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:07.922740  147758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:05.227611  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.229836  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.065246  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:08.067360  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.925140  147758 out.go:235]   - Booting up control plane ...
	I1010 19:28:07.925239  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:07.925356  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:07.925444  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:07.944375  147758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:07.951182  147758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:07.951274  147758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:08.087325  147758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:08.087560  147758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:08.598361  147758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.081439ms
	I1010 19:28:08.598502  147758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:09.727932  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:12.227939  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:10.566945  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:13.067142  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.100517  147758 kubeadm.go:310] [api-check] The API server is healthy after 5.501985157s
	I1010 19:28:14.119932  147758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:14.149557  147758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:14.207413  147758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:14.207735  147758 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-541370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:14.226199  147758 kubeadm.go:310] [bootstrap-token] Using token: sbg4v0.t5me93bb5vn8m913
	I1010 19:28:14.228059  147758 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:14.228208  147758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:14.241706  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:14.256554  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:14.263129  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:14.274346  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:14.282313  147758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:14.507850  147758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:14.970234  147758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:15.508328  147758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:15.509530  147758 kubeadm.go:310] 
	I1010 19:28:15.509635  147758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:15.509653  147758 kubeadm.go:310] 
	I1010 19:28:15.509743  147758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:15.509762  147758 kubeadm.go:310] 
	I1010 19:28:15.509795  147758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:15.509888  147758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:15.509954  147758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:15.509970  147758 kubeadm.go:310] 
	I1010 19:28:15.510083  147758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:15.510103  147758 kubeadm.go:310] 
	I1010 19:28:15.510203  147758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:15.510214  147758 kubeadm.go:310] 
	I1010 19:28:15.510297  147758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:15.510410  147758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:15.510489  147758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:15.510495  147758 kubeadm.go:310] 
	I1010 19:28:15.510603  147758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:15.510707  147758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:15.510724  147758 kubeadm.go:310] 
	I1010 19:28:15.510807  147758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.510958  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:15.511005  147758 kubeadm.go:310] 	--control-plane 
	I1010 19:28:15.511034  147758 kubeadm.go:310] 
	I1010 19:28:15.511161  147758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:15.511173  147758 kubeadm.go:310] 
	I1010 19:28:15.511268  147758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.511403  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:15.512298  147758 kubeadm.go:310] W1010 19:28:06.890572    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512594  147758 kubeadm.go:310] W1010 19:28:06.891448    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512702  147758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:15.512734  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:28:15.512744  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:15.514703  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:15.516229  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:15.527554  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:15.549266  147758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:15.549362  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:15.549399  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-541370 minikube.k8s.io/updated_at=2024_10_10T19_28_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=embed-certs-541370 minikube.k8s.io/primary=true
	I1010 19:28:15.590732  147758 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:15.740942  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.241392  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.741807  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:14.229241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:16.727260  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.059512  148525 pod_ready.go:82] duration metric: took 4m0.00022742s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:14.059550  148525 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:28:14.059569  148525 pod_ready.go:39] duration metric: took 4m7.001942194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:14.059614  148525 kubeadm.go:597] duration metric: took 4m14.998320151s to restartPrimaryControlPlane
	W1010 19:28:14.059672  148525 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:28:14.059698  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:28:17.241315  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:17.741580  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.241006  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.742042  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.241251  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.741030  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.862541  147758 kubeadm.go:1113] duration metric: took 4.313246481s to wait for elevateKubeSystemPrivileges
	I1010 19:28:19.862579  147758 kubeadm.go:394] duration metric: took 5m2.288571479s to StartCluster
	I1010 19:28:19.862628  147758 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.862751  147758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:19.864528  147758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.864812  147758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:19.864910  147758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:19.865019  147758 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-541370"
	I1010 19:28:19.865041  147758 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-541370"
	W1010 19:28:19.865053  147758 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:19.865062  147758 addons.go:69] Setting default-storageclass=true in profile "embed-certs-541370"
	I1010 19:28:19.865085  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865077  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:19.865129  147758 addons.go:69] Setting metrics-server=true in profile "embed-certs-541370"
	I1010 19:28:19.865164  147758 addons.go:234] Setting addon metrics-server=true in "embed-certs-541370"
	W1010 19:28:19.865179  147758 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:19.865115  147758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-541370"
	I1010 19:28:19.865215  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865558  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865593  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865607  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865629  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865595  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865725  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.866857  147758 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:19.868590  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:19.882524  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I1010 19:28:19.882595  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I1010 19:28:19.882678  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I1010 19:28:19.883065  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883168  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883281  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883559  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883575  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883657  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883669  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883802  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883818  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883968  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.883976  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884141  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884194  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.884408  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884437  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.884684  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884746  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.887912  147758 addons.go:234] Setting addon default-storageclass=true in "embed-certs-541370"
	W1010 19:28:19.887942  147758 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:19.887973  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.888333  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.888383  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.901588  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1010 19:28:19.902131  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.902597  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.902621  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.902927  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.903101  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.904556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.905207  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1010 19:28:19.905621  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.906188  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.906209  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.906599  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.906647  147758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:19.906837  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.907699  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1010 19:28:19.908147  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.908557  147758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:19.908584  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:19.908610  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.908705  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.908717  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.908745  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.909364  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.910154  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.910208  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.910840  147758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:19.912716  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.912722  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:19.912743  147758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:19.912769  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.913199  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.913224  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.913500  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.913682  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.913845  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.913972  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.921800  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922343  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.922374  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922653  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.922842  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.922965  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.923108  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.935097  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I1010 19:28:19.935605  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.936123  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.936146  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.936561  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.936747  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.938789  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.939019  147758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:19.939034  147758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:19.939054  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.941682  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942137  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.942165  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942404  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.942642  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.942767  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.942915  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:20.108247  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:20.149819  147758 node_ready.go:35] waiting up to 6m0s for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163096  147758 node_ready.go:49] node "embed-certs-541370" has status "Ready":"True"
	I1010 19:28:20.163118  147758 node_ready.go:38] duration metric: took 13.26779ms for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163128  147758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:20.168620  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:20.241952  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:20.241978  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:20.249679  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:20.290149  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:20.290190  147758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:20.291475  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:20.410539  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.410582  147758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:20.491567  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.684370  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684403  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.684695  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.684742  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.684749  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.684756  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684764  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.685029  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.685059  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.685036  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.695901  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.695926  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.696202  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.696249  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439463  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147952803s)
	I1010 19:28:21.439626  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.439659  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.439951  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.439969  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.439976  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439997  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.440009  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.440299  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.440298  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.440314  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.780486  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.288854773s)
	I1010 19:28:21.780551  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.780567  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.780948  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.780980  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.780996  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781007  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.781016  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.781289  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.781310  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781331  147758 addons.go:475] Verifying addon metrics-server=true in "embed-certs-541370"
	I1010 19:28:21.783512  147758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:21.784958  147758 addons.go:510] duration metric: took 1.92006141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
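The addon rollout above follows one pattern per addon: copy the manifest into /etc/kubernetes/addons/ on the node, then apply it with the in-VM kubectl against /var/lib/minikube/kubeconfig. Below is a minimal Go sketch of that apply step, assuming kubectl is on the local PATH and treating the file paths from the log as placeholders; minikube itself performs this over its ssh_runner rather than locally.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// applyAddon mirrors the pattern in the log: apply one or more addon
// manifests with an explicit kubeconfig. Paths here are illustrative.
func applyAddon(kubeconfig string, manifests ...string) error {
	args := []string{"--kubeconfig", kubeconfig, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	// Hypothetical local paths; in the log these live inside the VM.
	err := applyAddon("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml")
	if err != nil {
		log.Fatal(err)
	}
}
```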
	I1010 19:28:19.225844  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:21.227960  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:23.726439  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:22.195129  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:24.678736  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:25.727053  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.727657  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.177348  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:29.177459  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.177485  147758 pod_ready.go:82] duration metric: took 9.008841503s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.177495  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182744  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.182777  147758 pod_ready.go:82] duration metric: took 5.273263ms for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182791  147758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191507  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.191539  147758 pod_ready.go:82] duration metric: took 8.738961ms for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191554  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199167  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.199218  147758 pod_ready.go:82] duration metric: took 7.635672ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199234  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204558  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.204581  147758 pod_ready.go:82] duration metric: took 5.337574ms for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204591  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573781  147758 pod_ready.go:93] pod "kube-proxy-6hdds" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.573808  147758 pod_ready.go:82] duration metric: took 369.210969ms for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573818  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974015  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.974039  147758 pod_ready.go:82] duration metric: took 400.214845ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974048  147758 pod_ready.go:39] duration metric: took 9.810911064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:29.974066  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:29.974120  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:29.991332  147758 api_server.go:72] duration metric: took 10.126480862s to wait for apiserver process to appear ...
	I1010 19:28:29.991356  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:29.991382  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:28:29.995855  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:28:29.997488  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:28:29.997516  147758 api_server.go:131] duration metric: took 6.152312ms to wait for apiserver health ...
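The healthz wait above reduces to polling https://<node-ip>:8443/healthz until it returns HTTP 200 with body "ok". A minimal sketch of that poll, assuming the apiserver's self-signed certificate is skipped (InsecureSkipVerify) and using the node address from the log purely as an example:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver healthz endpoint until it answers 200 "ok"
// or the timeout expires, mirroring the check in the log above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	// Example address taken from the log; substitute your own node IP.
	if err := waitHealthz("https://192.168.39.120:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```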
	I1010 19:28:29.997526  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:28:30.176631  147758 system_pods.go:59] 9 kube-system pods found
	I1010 19:28:30.176662  147758 system_pods.go:61] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.176668  147758 system_pods.go:61] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.176672  147758 system_pods.go:61] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.176676  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.176680  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.176683  147758 system_pods.go:61] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.176686  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.176693  147758 system_pods.go:61] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.176699  147758 system_pods.go:61] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.176707  147758 system_pods.go:74] duration metric: took 179.174083ms to wait for pod list to return data ...
	I1010 19:28:30.176714  147758 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:28:30.375326  147758 default_sa.go:45] found service account: "default"
	I1010 19:28:30.375361  147758 default_sa.go:55] duration metric: took 198.640267ms for default service account to be created ...
	I1010 19:28:30.375374  147758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:28:30.578749  147758 system_pods.go:86] 9 kube-system pods found
	I1010 19:28:30.578780  147758 system_pods.go:89] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.578786  147758 system_pods.go:89] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.578790  147758 system_pods.go:89] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.578794  147758 system_pods.go:89] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.578797  147758 system_pods.go:89] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.578801  147758 system_pods.go:89] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.578804  147758 system_pods.go:89] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.578810  147758 system_pods.go:89] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.578814  147758 system_pods.go:89] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.578822  147758 system_pods.go:126] duration metric: took 203.441477ms to wait for k8s-apps to be running ...
	I1010 19:28:30.578829  147758 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:28:30.578877  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:30.596523  147758 system_svc.go:56] duration metric: took 17.684729ms WaitForService to wait for kubelet
	I1010 19:28:30.596553  147758 kubeadm.go:582] duration metric: took 10.731708748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:28:30.596573  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:28:30.774749  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:28:30.774783  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:28:30.774807  147758 node_conditions.go:105] duration metric: took 178.228671ms to run NodePressure ...
	I1010 19:28:30.774822  147758 start.go:241] waiting for startup goroutines ...
	I1010 19:28:30.774831  147758 start.go:246] waiting for cluster config update ...
	I1010 19:28:30.774845  147758 start.go:255] writing updated cluster config ...
	I1010 19:28:30.775121  147758 ssh_runner.go:195] Run: rm -f paused
	I1010 19:28:30.826689  147758 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:28:30.828795  147758 out.go:177] * Done! kubectl is now configured to use "embed-certs-541370" cluster and "default" namespace by default
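The "minor skew: 0" figure reported two lines above is simply the difference between the minor version of the local kubectl and that of the cluster. A minimal sketch of that arithmetic, with minorSkew as a hypothetical helper rather than minikube's own function:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, e.g. "1.31.1" vs "1.29.0" -> 2.
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.31.1", "1.31.1")
	fmt.Println("minor skew:", skew) // prints 0, matching the log line above
}
```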
	I1010 19:28:29.728096  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:32.229632  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:34.726536  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:36.727032  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:38.727488  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:40.372903  148525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.31317648s)
	I1010 19:28:40.372991  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:40.389319  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:40.400123  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:40.411906  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:40.411932  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:40.411976  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:28:40.421840  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:40.421904  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:40.432229  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:28:40.442121  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:40.442203  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:40.452969  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.463085  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:40.463146  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.473103  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:28:40.482854  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:40.482914  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
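The sequence above is a stale-kubeconfig sweep: for each file under /etc/kubernetes, grep for the expected control-plane URL and remove the file when the URL is not found (here each grep fails only because the files no longer exist after the reset). A minimal local sketch of the same check, assuming direct file access instead of minikube's ssh_runner and treating the paths and URL as placeholders:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfig removes path if it exists but does not reference wantURL,
// mirroring the grep-then-rm pattern in the log. Paths and URL are illustrative.
func cleanStaleConfig(path, wantURL string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean, same as the "No such file" case above
	}
	if err != nil {
		return err
	}
	if !strings.Contains(string(data), wantURL) {
		fmt.Printf("%q does not reference %s, removing\n", path, wantURL)
		return os.Remove(path)
	}
	return nil
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(f, "https://control-plane.minikube.internal:8444"); err != nil {
			fmt.Println(err)
		}
	}
}
```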
	I1010 19:28:40.494023  148525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:40.543369  148525 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:40.543466  148525 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:40.657301  148525 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:40.657462  148525 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:40.657579  148525 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:40.669222  148525 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:40.670995  148525 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:40.671102  148525 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:40.671171  148525 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:40.671284  148525 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:40.671374  148525 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:40.671471  148525 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:40.671557  148525 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:40.671650  148525 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:40.671751  148525 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:40.671895  148525 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:40.672000  148525 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:40.672056  148525 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:40.672136  148525 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:40.876613  148525 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:41.109518  148525 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:41.186751  148525 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:41.424710  148525 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:41.479611  148525 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:41.480235  148525 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:41.483222  148525 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:41.227521  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:43.728023  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:41.484809  148525 out.go:235]   - Booting up control plane ...
	I1010 19:28:41.484935  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:41.485020  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:41.485317  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:41.506919  148525 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:41.517006  148525 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:41.517077  148525 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:41.653211  148525 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:41.653364  148525 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:42.655360  148525 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001910447s
	I1010 19:28:42.655482  148525 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:47.658431  148525 kubeadm.go:310] [api-check] The API server is healthy after 5.003169217s
	I1010 19:28:47.676178  148525 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:47.694752  148525 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:47.720376  148525 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:47.720645  148525 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-361847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:47.736489  148525 kubeadm.go:310] [bootstrap-token] Using token: cprf0t.lm4xp75yi0cdu4sy
	I1010 19:28:46.228217  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:48.726740  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:47.737958  148525 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:47.738089  148525 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:47.750073  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:47.758010  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:47.761649  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:47.768953  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:47.774428  148525 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:48.065988  148525 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:48.502538  148525 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:49.066479  148525 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:49.069842  148525 kubeadm.go:310] 
	I1010 19:28:49.069937  148525 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:49.069947  148525 kubeadm.go:310] 
	I1010 19:28:49.070046  148525 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:49.070058  148525 kubeadm.go:310] 
	I1010 19:28:49.070089  148525 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:49.070166  148525 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:49.070254  148525 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:49.070265  148525 kubeadm.go:310] 
	I1010 19:28:49.070342  148525 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:49.070353  148525 kubeadm.go:310] 
	I1010 19:28:49.070446  148525 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:49.070478  148525 kubeadm.go:310] 
	I1010 19:28:49.070544  148525 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:49.070640  148525 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:49.070750  148525 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:49.070773  148525 kubeadm.go:310] 
	I1010 19:28:49.070880  148525 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:49.070990  148525 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:49.071001  148525 kubeadm.go:310] 
	I1010 19:28:49.071153  148525 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.071299  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:49.071330  148525 kubeadm.go:310] 	--control-plane 
	I1010 19:28:49.071349  148525 kubeadm.go:310] 
	I1010 19:28:49.071468  148525 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:49.071497  148525 kubeadm.go:310] 
	I1010 19:28:49.072228  148525 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.072354  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:49.074595  148525 kubeadm.go:310] W1010 19:28:40.525557    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.074944  148525 kubeadm.go:310] W1010 19:28:40.526329    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.075102  148525 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:49.075143  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:28:49.075166  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:49.077190  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:49.078665  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:49.091792  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:49.113801  148525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:49.113920  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-361847 minikube.k8s.io/updated_at=2024_10_10T19_28_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=default-k8s-diff-port-361847 minikube.k8s.io/primary=true
	I1010 19:28:49.114074  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.154398  148525 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:49.351271  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.852049  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.351441  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.852022  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.351391  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.851329  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.351840  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.852392  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.351397  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.443325  148525 kubeadm.go:1113] duration metric: took 4.329288133s to wait for elevateKubeSystemPrivileges
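The repeated `kubectl get sa default` calls above are a readiness poll: the step waits for the `default` ServiceAccount to appear in the new cluster before the kube-system privileges are elevated via the minikube-rbac clusterrolebinding. A minimal sketch of that retry loop, assuming kubectl on the local PATH and a placeholder kubeconfig path:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, mirroring the ~500ms retry loop in the log above.
func waitDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // ServiceAccount exists; RBAC bootstrap can continue
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not found within %s", timeout)
}

func main() {
	// Placeholder path; the log uses /var/lib/minikube/kubeconfig inside the VM.
	if err := waitDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```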
	I1010 19:28:53.443363  148525 kubeadm.go:394] duration metric: took 4m54.439732071s to StartCluster
	I1010 19:28:53.443386  148525 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.443481  148525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:53.445465  148525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.445747  148525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:53.445842  148525 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:53.445957  148525 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.445980  148525 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.445992  148525 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:53.446004  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:53.446026  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446065  148525 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446100  148525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-361847"
	I1010 19:28:53.446085  148525 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446137  148525 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.446151  148525 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:53.446242  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446515  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.446562  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447089  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447135  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447315  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447360  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.450779  148525 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:53.452838  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:53.465502  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I1010 19:28:53.466020  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.466572  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.466594  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.466772  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1010 19:28:53.467034  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.467209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.467310  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.467828  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.467857  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.467899  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1010 19:28:53.468270  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.468451  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.468866  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.468891  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.469102  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.469150  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.469484  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.470068  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.470114  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.471192  148525 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.471213  148525 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:53.471261  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.471618  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.471664  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.486550  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1010 19:28:53.487068  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.487608  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.487626  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.488015  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.488329  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.490200  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I1010 19:28:53.490240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.490790  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.491318  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.491341  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.491682  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.491957  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1010 19:28:53.492100  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.492423  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.492731  148525 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:53.492811  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.492831  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.493240  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.493885  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.493979  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.494031  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.494359  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:53.494381  148525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:53.494397  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.495771  148525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:51.226596  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227299  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227335  147213 pod_ready.go:82] duration metric: took 4m0.007224391s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:53.227346  147213 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1010 19:28:53.227355  147213 pod_ready.go:39] duration metric: took 4m5.554224355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.227375  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:53.227419  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:53.227484  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:53.288713  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.288749  147213 cri.go:89] found id: ""
	I1010 19:28:53.288759  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:53.288823  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.294819  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:53.294904  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:53.340169  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:53.340197  147213 cri.go:89] found id: ""
	I1010 19:28:53.340207  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:53.340271  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.345214  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:53.345292  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:53.392808  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.392838  147213 cri.go:89] found id: ""
	I1010 19:28:53.392859  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:53.392921  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.398275  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:53.398361  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:53.439567  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.439594  147213 cri.go:89] found id: ""
	I1010 19:28:53.439604  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:53.439665  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.444366  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:53.444436  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:53.522580  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:53.522597  147213 cri.go:89] found id: ""
	I1010 19:28:53.522605  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:53.522654  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.528890  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:53.528974  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:53.575933  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:53.575963  147213 cri.go:89] found id: ""
	I1010 19:28:53.575975  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:53.576035  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.581693  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:53.581763  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:53.619789  147213 cri.go:89] found id: ""
	I1010 19:28:53.619819  147213 logs.go:282] 0 containers: []
	W1010 19:28:53.619831  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:53.619839  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:53.619899  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:53.659715  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:53.659746  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:53.659752  147213 cri.go:89] found id: ""
	I1010 19:28:53.659762  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:53.659828  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.664377  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.668766  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:53.668796  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:53.685976  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:53.686007  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:53.497232  148525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:53.497251  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:53.497273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.497732  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498599  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.498627  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498971  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.499159  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.499312  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.499414  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.501044  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.501531  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501782  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.501956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.502080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.502232  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.512240  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1010 19:28:53.512809  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.513347  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.513368  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.513787  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.514001  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.515436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.515639  148525 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.515659  148525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:53.515681  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.518128  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518596  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.518628  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518909  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.519080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.519216  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.519376  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.712871  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:53.755059  148525 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766564  148525 node_ready.go:49] node "default-k8s-diff-port-361847" has status "Ready":"True"
	I1010 19:28:53.766590  148525 node_ready.go:38] duration metric: took 11.490223ms for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766603  148525 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.777458  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:53.875493  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:53.875525  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:53.911443  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.944885  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:53.944919  148525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:53.945487  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:54.011209  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.011239  148525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:54.039679  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.598172  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598226  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598584  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598608  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.598619  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598898  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:54.598931  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598939  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.643365  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.643392  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.643734  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.643760  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287018  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.341483807s)
	I1010 19:28:55.287045  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.247326452s)
	I1010 19:28:55.287089  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287094  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287112  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287440  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287479  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287506  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287524  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287570  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287589  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287598  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287607  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287818  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287831  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.287855  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287862  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287872  148525 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-361847"
	I1010 19:28:55.287880  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.289944  148525 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:53.841387  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:53.841441  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.892951  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:53.893005  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.947636  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:53.947668  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.992969  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:53.992998  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:54.520652  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:54.520703  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:28:54.588366  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:54.588418  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:54.651179  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:54.651227  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:54.712881  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:54.712925  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:54.779030  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:54.779094  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:54.821961  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:54.822002  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:54.871409  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:54.871446  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:57.425310  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:57.442308  147213 api_server.go:72] duration metric: took 4m17.02881034s to wait for apiserver process to appear ...
	I1010 19:28:57.442343  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:57.442383  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:57.442444  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:57.481392  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.481420  147213 cri.go:89] found id: ""
	I1010 19:28:57.481430  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:57.481503  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.486191  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:57.486269  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:57.532238  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.532271  147213 cri.go:89] found id: ""
	I1010 19:28:57.532284  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:57.532357  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.538105  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:57.538188  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:57.579729  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:57.579757  147213 cri.go:89] found id: ""
	I1010 19:28:57.579767  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:57.579833  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.584494  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:57.584568  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:57.623920  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:57.623949  147213 cri.go:89] found id: ""
	I1010 19:28:57.623960  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:57.624028  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.628927  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:57.629018  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:57.669669  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.669698  147213 cri.go:89] found id: ""
	I1010 19:28:57.669707  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:57.669771  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.674449  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:57.674526  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:57.721856  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:57.721881  147213 cri.go:89] found id: ""
	I1010 19:28:57.721891  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:57.721955  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.726422  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:57.726497  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:57.764464  147213 cri.go:89] found id: ""
	I1010 19:28:57.764499  147213 logs.go:282] 0 containers: []
	W1010 19:28:57.764512  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:57.764521  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:57.764595  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:57.809758  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:57.809784  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:57.809788  147213 cri.go:89] found id: ""
	I1010 19:28:57.809797  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:57.809854  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.815576  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.820152  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:57.820181  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.869339  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:57.869383  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.918698  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:57.918739  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.960939  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:57.960985  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:58.013572  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:58.013612  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:58.053247  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:58.053277  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:58.507428  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:58.507473  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:58.552704  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:58.552742  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:58.672077  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:58.672127  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:58.690997  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:58.691049  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:58.735251  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:58.735287  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:55.291700  148525 addons.go:510] duration metric: took 1.845864985s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:55.785186  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:57.789567  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:00.284444  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:01.297627  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.297660  148525 pod_ready.go:82] duration metric: took 7.520173084s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.297676  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804654  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.804676  148525 pod_ready.go:82] duration metric: took 506.992872ms for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804690  148525 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809788  148525 pod_ready.go:93] pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.809814  148525 pod_ready.go:82] duration metric: took 5.116023ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809825  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814460  148525 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.814486  148525 pod_ready.go:82] duration metric: took 4.652085ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814501  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819719  148525 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.819741  148525 pod_ready.go:82] duration metric: took 5.231258ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819753  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082285  148525 pod_ready.go:93] pod "kube-proxy-jlvn6" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.082325  148525 pod_ready.go:82] duration metric: took 262.562954ms for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082342  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481705  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.481730  148525 pod_ready.go:82] duration metric: took 399.378957ms for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481742  148525 pod_ready.go:39] duration metric: took 8.715126416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:29:02.481779  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:29:02.481832  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:29:02.498706  148525 api_server.go:72] duration metric: took 9.052891898s to wait for apiserver process to appear ...
	I1010 19:29:02.498760  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:29:02.498795  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:29:02.503501  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:29:02.504594  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:02.504620  148525 api_server.go:131] duration metric: took 5.850548ms to wait for apiserver health ...
	I1010 19:29:02.504629  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:02.685579  148525 system_pods.go:59] 9 kube-system pods found
	I1010 19:29:02.685611  148525 system_pods.go:61] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:02.685618  148525 system_pods.go:61] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:02.685624  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:02.685630  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:02.685635  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:02.685639  148525 system_pods.go:61] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:02.685644  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:02.685653  148525 system_pods.go:61] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:02.685658  148525 system_pods.go:61] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:02.685669  148525 system_pods.go:74] duration metric: took 181.032548ms to wait for pod list to return data ...
	I1010 19:29:02.685683  148525 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:02.883256  148525 default_sa.go:45] found service account: "default"
	I1010 19:29:02.883288  148525 default_sa.go:55] duration metric: took 197.59742ms for default service account to be created ...
	I1010 19:29:02.883298  148525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:03.084706  148525 system_pods.go:86] 9 kube-system pods found
	I1010 19:29:03.084737  148525 system_pods.go:89] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:03.084742  148525 system_pods.go:89] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:03.084746  148525 system_pods.go:89] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:03.084751  148525 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:03.084755  148525 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:03.084759  148525 system_pods.go:89] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:03.084762  148525 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:03.084768  148525 system_pods.go:89] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:03.084772  148525 system_pods.go:89] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:03.084779  148525 system_pods.go:126] duration metric: took 201.476637ms to wait for k8s-apps to be running ...
	I1010 19:29:03.084787  148525 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:03.084832  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:03.100986  148525 system_svc.go:56] duration metric: took 16.183062ms WaitForService to wait for kubelet
	I1010 19:29:03.101026  148525 kubeadm.go:582] duration metric: took 9.655245557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:03.101050  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:03.282063  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:03.282095  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:03.282106  148525 node_conditions.go:105] duration metric: took 181.049888ms to run NodePressure ...
	I1010 19:29:03.282119  148525 start.go:241] waiting for startup goroutines ...
	I1010 19:29:03.282125  148525 start.go:246] waiting for cluster config update ...
	I1010 19:29:03.282135  148525 start.go:255] writing updated cluster config ...
	I1010 19:29:03.282414  148525 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:03.331838  148525 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:03.333698  148525 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-361847" cluster and "default" namespace by default
	I1010 19:28:58.775358  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:58.775396  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:58.812210  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:58.812269  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:01.381750  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:29:01.386658  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:29:01.387793  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:01.387819  147213 api_server.go:131] duration metric: took 3.945468552s to wait for apiserver health ...
	I1010 19:29:01.387829  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:01.387861  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:29:01.387948  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:29:01.433312  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:01.433344  147213 cri.go:89] found id: ""
	I1010 19:29:01.433433  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:29:01.433521  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.437920  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:29:01.437983  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:29:01.476429  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.476458  147213 cri.go:89] found id: ""
	I1010 19:29:01.476470  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:29:01.476522  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.480912  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:29:01.480987  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:29:01.522141  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.522164  147213 cri.go:89] found id: ""
	I1010 19:29:01.522173  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:29:01.522238  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.526742  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:29:01.526803  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:29:01.572715  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:01.572747  147213 cri.go:89] found id: ""
	I1010 19:29:01.572759  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:29:01.572814  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.577754  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:29:01.577832  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:29:01.616077  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.616104  147213 cri.go:89] found id: ""
	I1010 19:29:01.616121  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:29:01.616185  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.620622  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:29:01.620702  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:29:01.662859  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:01.662889  147213 cri.go:89] found id: ""
	I1010 19:29:01.662903  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:29:01.662964  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.667491  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:29:01.667585  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:29:01.706191  147213 cri.go:89] found id: ""
	I1010 19:29:01.706217  147213 logs.go:282] 0 containers: []
	W1010 19:29:01.706228  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:29:01.706234  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:29:01.706299  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:29:01.753559  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:01.753581  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:01.753584  147213 cri.go:89] found id: ""
	I1010 19:29:01.753591  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:29:01.753645  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.758179  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.762336  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:29:01.762358  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:29:01.867667  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:29:01.867698  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.911722  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:29:01.911756  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.955152  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:29:01.955189  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.995010  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:29:01.995041  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:02.047505  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:29:02.047546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:02.085080  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:29:02.085110  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:02.128482  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:29:02.128527  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:02.194867  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:29:02.194904  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:29:02.211881  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:29:02.211911  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:02.262969  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:29:02.263013  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:02.302921  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:29:02.302956  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:29:02.671102  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:29:02.671169  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:29:05.241477  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:29:05.241508  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.241513  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.241517  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.241521  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.241525  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.241528  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.241534  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.241540  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.241549  147213 system_pods.go:74] duration metric: took 3.853712488s to wait for pod list to return data ...
	I1010 19:29:05.241556  147213 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:05.244686  147213 default_sa.go:45] found service account: "default"
	I1010 19:29:05.244721  147213 default_sa.go:55] duration metric: took 3.158069ms for default service account to be created ...
	I1010 19:29:05.244733  147213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:05.249372  147213 system_pods.go:86] 8 kube-system pods found
	I1010 19:29:05.249398  147213 system_pods.go:89] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.249404  147213 system_pods.go:89] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.249408  147213 system_pods.go:89] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.249413  147213 system_pods.go:89] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.249418  147213 system_pods.go:89] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.249425  147213 system_pods.go:89] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.249433  147213 system_pods.go:89] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.249442  147213 system_pods.go:89] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.249455  147213 system_pods.go:126] duration metric: took 4.715381ms to wait for k8s-apps to be running ...
	I1010 19:29:05.249467  147213 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:05.249519  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:05.265180  147213 system_svc.go:56] duration metric: took 15.703413ms WaitForService to wait for kubelet
	I1010 19:29:05.265216  147213 kubeadm.go:582] duration metric: took 4m24.851723603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:05.265237  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:05.268775  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:05.268807  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:05.268821  147213 node_conditions.go:105] duration metric: took 3.575195ms to run NodePressure ...
	I1010 19:29:05.268834  147213 start.go:241] waiting for startup goroutines ...
	I1010 19:29:05.268840  147213 start.go:246] waiting for cluster config update ...
	I1010 19:29:05.268869  147213 start.go:255] writing updated cluster config ...
	I1010 19:29:05.269148  147213 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:05.319999  147213 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:05.322189  147213 out.go:177] * Done! kubectl is now configured to use "no-preload-320324" cluster and "default" namespace by default
	I1010 19:29:41.275119  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:29:41.275272  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:29:41.276822  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:29:41.276919  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:29:41.277017  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:29:41.277142  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:29:41.277254  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:29:41.277357  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:29:41.279069  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:29:41.279160  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:29:41.279217  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:29:41.279306  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:29:41.279381  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:29:41.279484  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:29:41.279576  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:29:41.279674  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:29:41.279779  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:29:41.279906  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:29:41.279971  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:29:41.280005  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:29:41.280052  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:29:41.280095  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:29:41.280144  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:29:41.280219  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:29:41.280317  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:29:41.280474  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:29:41.280583  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:29:41.280648  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:29:41.280736  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:29:41.282180  148123 out.go:235]   - Booting up control plane ...
	I1010 19:29:41.282266  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:29:41.282336  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:29:41.282414  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:29:41.282538  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:29:41.282748  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:29:41.282822  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:29:41.282896  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283061  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283123  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283287  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283348  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283519  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283623  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283809  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283900  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.284115  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.284135  148123 kubeadm.go:310] 
	I1010 19:29:41.284201  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:29:41.284243  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:29:41.284257  148123 kubeadm.go:310] 
	I1010 19:29:41.284307  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:29:41.284346  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:29:41.284481  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:29:41.284505  148123 kubeadm.go:310] 
	I1010 19:29:41.284655  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:29:41.284714  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:29:41.284752  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:29:41.284758  148123 kubeadm.go:310] 
	I1010 19:29:41.284913  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:29:41.285038  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:29:41.285055  148123 kubeadm.go:310] 
	I1010 19:29:41.285169  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:29:41.285307  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:29:41.285475  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:29:41.285587  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:29:41.285644  148123 kubeadm.go:310] 
	W1010 19:29:41.285747  148123 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1010 19:29:41.285792  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:29:46.681684  148123 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.395862838s)
	I1010 19:29:46.681797  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:46.696927  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:29:46.708250  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:29:46.708280  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:29:46.708339  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:29:46.719748  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:29:46.719828  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:29:46.730892  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:29:46.742329  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:29:46.742401  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:29:46.752919  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.762860  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:29:46.762932  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.772513  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:29:46.781672  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:29:46.781746  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:29:46.791250  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:29:47.018706  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:31:43.351851  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:31:43.351992  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:31:43.353664  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:31:43.353752  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:31:43.353861  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:31:43.353998  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:31:43.354130  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:31:43.354235  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:31:43.356450  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:31:43.356557  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:31:43.356652  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:31:43.356783  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:31:43.356902  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:31:43.357002  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:31:43.357074  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:31:43.357181  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:31:43.357247  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:31:43.357325  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:31:43.357408  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:31:43.357459  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:31:43.357529  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:31:43.357604  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:31:43.357676  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:31:43.357751  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:31:43.357830  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:31:43.357979  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:31:43.358092  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:31:43.358158  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:31:43.358263  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:31:43.360216  148123 out.go:235]   - Booting up control plane ...
	I1010 19:31:43.360332  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:31:43.360463  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:31:43.360551  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:31:43.360673  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:31:43.360907  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:31:43.360976  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:31:43.361058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361309  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361410  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361648  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361742  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361960  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362308  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362399  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362640  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362652  148123 kubeadm.go:310] 
	I1010 19:31:43.362708  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:31:43.362758  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:31:43.362768  148123 kubeadm.go:310] 
	I1010 19:31:43.362809  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:31:43.362858  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:31:43.362981  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:31:43.362993  148123 kubeadm.go:310] 
	I1010 19:31:43.363119  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:31:43.363164  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:31:43.363204  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:31:43.363219  148123 kubeadm.go:310] 
	I1010 19:31:43.363344  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:31:43.363454  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:31:43.363465  148123 kubeadm.go:310] 
	I1010 19:31:43.363591  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:31:43.363708  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:31:43.363803  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:31:43.363892  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:31:43.363978  148123 kubeadm.go:394] duration metric: took 8m3.085634474s to StartCluster
	I1010 19:31:43.364052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:31:43.364159  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:31:43.364268  148123 kubeadm.go:310] 
	I1010 19:31:43.413041  148123 cri.go:89] found id: ""
	I1010 19:31:43.413075  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.413086  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:31:43.413092  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:31:43.413206  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:31:43.456454  148123 cri.go:89] found id: ""
	I1010 19:31:43.456487  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.456499  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:31:43.456507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:31:43.456567  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:31:43.498582  148123 cri.go:89] found id: ""
	I1010 19:31:43.498614  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.498624  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:31:43.498633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:31:43.498694  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:31:43.540557  148123 cri.go:89] found id: ""
	I1010 19:31:43.540589  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.540598  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:31:43.540605  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:31:43.540661  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:31:43.576743  148123 cri.go:89] found id: ""
	I1010 19:31:43.576776  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.576787  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:31:43.576796  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:31:43.576887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:31:43.614620  148123 cri.go:89] found id: ""
	I1010 19:31:43.614652  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.614662  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:31:43.614668  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:31:43.614730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:31:43.652008  148123 cri.go:89] found id: ""
	I1010 19:31:43.652061  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.652075  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:31:43.652084  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:31:43.652153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:31:43.689990  148123 cri.go:89] found id: ""
	I1010 19:31:43.690019  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.690028  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:31:43.690043  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:31:43.690056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:31:43.745634  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:31:43.745673  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:31:43.760064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:31:43.760095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:31:43.840168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:31:43.840197  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:31:43.840214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:31:43.943265  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:31:43.943303  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1010 19:31:43.987758  148123 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:31:43.987816  148123 out.go:270] * 
	W1010 19:31:43.987891  148123 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.987906  148123 out.go:270] * 
	W1010 19:31:43.988703  148123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:31:43.991928  148123 out.go:201] 
	W1010 19:31:43.993494  148123 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.993556  148123 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:31:43.993585  148123 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1010 19:31:43.995273  148123 out.go:201] 
	
	
	==> CRI-O <==
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.272994343Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589439272974400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65f1ea0c-e7a6-472b-9c87-e8006bd42db7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.273533000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79f07015-9823-4ae4-867c-2ccf661670a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.273585748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79f07015-9823-4ae4-867c-2ccf661670a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.273817993Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588308885762612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdd328ebee5dae2ff3960247044a0983c6992c0f2770f8d7093a26068e1d385,PodSandboxId:86ec69250ff8253f8bc74a5d230ebbcf7a36105b8133f79a7c9c870db90ed0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728588288025681818,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0d6bf6d0-79fc-47a0-b588-a7c47c06e191,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846,PodSandboxId:fbc0eb700ee8cbff7cc6e919d6c674db8b6d7b3694c8ca70a2d81ce5a385c0a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588285772217003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-86brb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e5f869-f82f-4bd4-9d9c-89499fa89c89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728588278099877129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664,PodSandboxId:453ffe467d42c751befb3e2f09eaa4e43416314d797244a4c255966077ed50eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588278100034786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b2c419-7299-4bc4-b263-99408b9484
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea,PodSandboxId:39c9cdf3f7d947934aa345dbc5ee85c57af2eba8066e6683886d2bb0efae5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588273334377136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9368a480685ae4
528d20670060406c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36,PodSandboxId:b590286c4520957ce4dfe6cb3da44b285f654565000cbe5d407c96b76d43c1d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588273283344969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cbcec58925c6e71fe76a35de28ca1b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023,PodSandboxId:21ba7aa94ead484c32379a5c41acc3e6cd5e6e587e0e66f72a9cb0c7ce8b29d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588273280730422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56981c4dc0d3b087e0e04dd21a000497,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba,PodSandboxId:37f20f27a64ee15d5185bf4dd7bee2bc6e698ebe0826a2b8b13462e3a3ff441e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588273287152746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b732db1a340209d45b2d9b80eca5de,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79f07015-9823-4ae4-867c-2ccf661670a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.310908562Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f25a69e-b04d-453e-9680-523cb3570b88 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.311005309Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f25a69e-b04d-453e-9680-523cb3570b88 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.312255301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e20370f-552e-455d-87ee-852ced2d7612 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.312656318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589439312634382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e20370f-552e-455d-87ee-852ced2d7612 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.313235285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d689d7f6-c804-4c9d-934d-20d267fafbc1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.313285033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d689d7f6-c804-4c9d-934d-20d267fafbc1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.313558828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588308885762612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdd328ebee5dae2ff3960247044a0983c6992c0f2770f8d7093a26068e1d385,PodSandboxId:86ec69250ff8253f8bc74a5d230ebbcf7a36105b8133f79a7c9c870db90ed0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728588288025681818,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0d6bf6d0-79fc-47a0-b588-a7c47c06e191,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846,PodSandboxId:fbc0eb700ee8cbff7cc6e919d6c674db8b6d7b3694c8ca70a2d81ce5a385c0a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588285772217003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-86brb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e5f869-f82f-4bd4-9d9c-89499fa89c89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728588278099877129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664,PodSandboxId:453ffe467d42c751befb3e2f09eaa4e43416314d797244a4c255966077ed50eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588278100034786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b2c419-7299-4bc4-b263-99408b9484
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea,PodSandboxId:39c9cdf3f7d947934aa345dbc5ee85c57af2eba8066e6683886d2bb0efae5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588273334377136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9368a480685ae4
528d20670060406c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36,PodSandboxId:b590286c4520957ce4dfe6cb3da44b285f654565000cbe5d407c96b76d43c1d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588273283344969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cbcec58925c6e71fe76a35de28ca1b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023,PodSandboxId:21ba7aa94ead484c32379a5c41acc3e6cd5e6e587e0e66f72a9cb0c7ce8b29d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588273280730422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56981c4dc0d3b087e0e04dd21a000497,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba,PodSandboxId:37f20f27a64ee15d5185bf4dd7bee2bc6e698ebe0826a2b8b13462e3a3ff441e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588273287152746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b732db1a340209d45b2d9b80eca5de,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d689d7f6-c804-4c9d-934d-20d267fafbc1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.355256588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bba9b979-98b9-4345-9e03-ba93ced9c717 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.355362214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bba9b979-98b9-4345-9e03-ba93ced9c717 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.356996092Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9c0747a-b224-4689-80e1-a6f6befa0606 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.357337657Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589439357317453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9c0747a-b224-4689-80e1-a6f6befa0606 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.358859914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a82d25c-7bcf-4724-baef-e09c79de00ba name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.358915667Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a82d25c-7bcf-4724-baef-e09c79de00ba name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.359111406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588308885762612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdd328ebee5dae2ff3960247044a0983c6992c0f2770f8d7093a26068e1d385,PodSandboxId:86ec69250ff8253f8bc74a5d230ebbcf7a36105b8133f79a7c9c870db90ed0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728588288025681818,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0d6bf6d0-79fc-47a0-b588-a7c47c06e191,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846,PodSandboxId:fbc0eb700ee8cbff7cc6e919d6c674db8b6d7b3694c8ca70a2d81ce5a385c0a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588285772217003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-86brb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e5f869-f82f-4bd4-9d9c-89499fa89c89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728588278099877129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664,PodSandboxId:453ffe467d42c751befb3e2f09eaa4e43416314d797244a4c255966077ed50eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588278100034786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b2c419-7299-4bc4-b263-99408b9484
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea,PodSandboxId:39c9cdf3f7d947934aa345dbc5ee85c57af2eba8066e6683886d2bb0efae5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588273334377136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9368a480685ae4
528d20670060406c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36,PodSandboxId:b590286c4520957ce4dfe6cb3da44b285f654565000cbe5d407c96b76d43c1d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588273283344969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cbcec58925c6e71fe76a35de28ca1b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023,PodSandboxId:21ba7aa94ead484c32379a5c41acc3e6cd5e6e587e0e66f72a9cb0c7ce8b29d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588273280730422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56981c4dc0d3b087e0e04dd21a000497,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba,PodSandboxId:37f20f27a64ee15d5185bf4dd7bee2bc6e698ebe0826a2b8b13462e3a3ff441e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588273287152746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b732db1a340209d45b2d9b80eca5de,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a82d25c-7bcf-4724-baef-e09c79de00ba name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.392945341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0ef5201-0976-4d3a-a6d7-6f01b567f8d6 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.393025122Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0ef5201-0976-4d3a-a6d7-6f01b567f8d6 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.394322083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56fb13f6-09a9-45c3-b128-6afbbc2af6fe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.394985971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589439394958934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56fb13f6-09a9-45c3-b128-6afbbc2af6fe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.395589776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7949d1be-c3a4-47ab-b3c7-3a6e04eca6fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.395640865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7949d1be-c3a4-47ab-b3c7-3a6e04eca6fa name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:59 no-preload-320324 crio[719]: time="2024-10-10 19:43:59.396121318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728588308885762612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cdd328ebee5dae2ff3960247044a0983c6992c0f2770f8d7093a26068e1d385,PodSandboxId:86ec69250ff8253f8bc74a5d230ebbcf7a36105b8133f79a7c9c870db90ed0b0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728588288025681818,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0d6bf6d0-79fc-47a0-b588-a7c47c06e191,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846,PodSandboxId:fbc0eb700ee8cbff7cc6e919d6c674db8b6d7b3694c8ca70a2d81ce5a385c0a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728588285772217003,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-86brb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28e5f869-f82f-4bd4-9d9c-89499fa89c89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e,PodSandboxId:3fd1f120d7e860d22ca8af701b88b3cd9684b1ead0bb35aa7ba45bc017f5a780,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728588278099877129,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
4965b53-60c3-4a97-bd52-d164d977247a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664,PodSandboxId:453ffe467d42c751befb3e2f09eaa4e43416314d797244a4c255966077ed50eb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728588278100034786,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vn6sv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5b2c419-7299-4bc4-b263-99408b9484
eb,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea,PodSandboxId:39c9cdf3f7d947934aa345dbc5ee85c57af2eba8066e6683886d2bb0efae5893,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728588273334377136,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9368a480685ae4
528d20670060406c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36,PodSandboxId:b590286c4520957ce4dfe6cb3da44b285f654565000cbe5d407c96b76d43c1d9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728588273283344969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cbcec58925c6e71fe76a35de28ca1b8,},Annotations:map[string]s
tring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023,PodSandboxId:21ba7aa94ead484c32379a5c41acc3e6cd5e6e587e0e66f72a9cb0c7ce8b29d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728588273280730422,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56981c4dc0d3b087e0e04dd21a000497,},Annotations:map[string]string{io.kube
rnetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba,PodSandboxId:37f20f27a64ee15d5185bf4dd7bee2bc6e698ebe0826a2b8b13462e3a3ff441e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728588273287152746,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-320324,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c4b732db1a340209d45b2d9b80eca5de,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7949d1be-c3a4-47ab-b3c7-3a6e04eca6fa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dfabbf70cd449       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   3fd1f120d7e86       storage-provisioner
	2cdd328ebee5d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   86ec69250ff82       busybox
	3c98f0e3e46ce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   fbc0eb700ee8c       coredns-7c65d6cfc9-86brb
	3a26f9cbec8dc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Running             kube-proxy                1                   453ffe467d42c       kube-proxy-vn6sv
	e14d37c6da3f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   3fd1f120d7e86       storage-provisioner
	d59196636b282       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      19 minutes ago      Running             kube-controller-manager   1                   39c9cdf3f7d94       kube-controller-manager-no-preload-320324
	20a9cb514f18a       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      19 minutes ago      Running             kube-apiserver            1                   37f20f27a64ee       kube-apiserver-no-preload-320324
	bfc9f1f069a02       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   b590286c45209       etcd-no-preload-320324
	d397ef1d012ac       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Running             kube-scheduler            1                   21ba7aa94ead4       kube-scheduler-no-preload-320324
	
	
	==> coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51761 - 27500 "HINFO IN 1293154880471448858.3682064905009596402. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025178284s
	
	
	==> describe nodes <==
	Name:               no-preload-320324
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-320324
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc
	                    minikube.k8s.io/name=no-preload-320324
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_10T19_15_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 10 Oct 2024 19:15:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-320324
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 10 Oct 2024 19:43:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 10 Oct 2024 19:40:26 +0000   Thu, 10 Oct 2024 19:15:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 10 Oct 2024 19:40:26 +0000   Thu, 10 Oct 2024 19:15:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 10 Oct 2024 19:40:26 +0000   Thu, 10 Oct 2024 19:15:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 10 Oct 2024 19:40:26 +0000   Thu, 10 Oct 2024 19:24:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.11
	  Hostname:    no-preload-320324
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 de1127283fbd43379c307da5d31f891b
	  System UUID:                de112728-3fbd-4337-9c30-7da5d31f891b
	  Boot ID:                    b2b25208-4027-431b-8637-789bdffffd2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-86brb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-320324                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-320324             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-320324    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-vn6sv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-320324             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-8w9lk              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-320324 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-320324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-320324 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-320324 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-320324 event: Registered Node no-preload-320324 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-320324 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-320324 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-320324 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-320324 event: Registered Node no-preload-320324 in Controller
	
	
	==> dmesg <==
	[Oct10 19:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051625] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042153] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct10 19:24] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.736405] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.597623] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.667265] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.066371] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073612] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.187890] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.153053] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.317917] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[ +15.554998] systemd-fstab-generator[1252]: Ignoring "noauto" option for root device
	[  +0.067907] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.962336] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[  +3.366540] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.745112] systemd-fstab-generator[2004]: Ignoring "noauto" option for root device
	[  +3.165730] kauditd_printk_skb: 61 callbacks suppressed
	[Oct10 19:25] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] <==
	{"level":"info","ts":"2024-10-10T19:24:33.874903Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.11:2380"}
	{"level":"info","ts":"2024-10-10T19:24:33.875450Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.11:2380"}
	{"level":"info","ts":"2024-10-10T19:24:33.875715Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-10T19:24:35.671177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-10T19:24:35.671255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-10T19:24:35.671296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 received MsgPreVoteResp from a2fc37409d191146 at term 2"}
	{"level":"info","ts":"2024-10-10T19:24:35.671325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 became candidate at term 3"}
	{"level":"info","ts":"2024-10-10T19:24:35.671333Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 received MsgVoteResp from a2fc37409d191146 at term 3"}
	{"level":"info","ts":"2024-10-10T19:24:35.671345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a2fc37409d191146 became leader at term 3"}
	{"level":"info","ts":"2024-10-10T19:24:35.671355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a2fc37409d191146 elected leader a2fc37409d191146 at term 3"}
	{"level":"info","ts":"2024-10-10T19:24:35.689874Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:24:35.690094Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-10T19:24:35.690677Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-10T19:24:35.690718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-10T19:24:35.689871Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a2fc37409d191146","local-member-attributes":"{Name:no-preload-320324 ClientURLs:[https://192.168.72.11:2379]}","request-path":"/0/members/a2fc37409d191146/attributes","cluster-id":"df21a150cc67cfa3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-10T19:24:35.691655Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:24:35.691658Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-10T19:24:35.692578Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.11:2379"}
	{"level":"info","ts":"2024-10-10T19:24:35.693393Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-10T19:34:35.728701Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":814}
	{"level":"info","ts":"2024-10-10T19:34:35.740588Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":814,"took":"11.515394ms","hash":3626833409,"current-db-size-bytes":2711552,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2711552,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-10-10T19:34:35.740662Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3626833409,"revision":814,"compact-revision":-1}
	{"level":"info","ts":"2024-10-10T19:39:35.736616Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1057}
	{"level":"info","ts":"2024-10-10T19:39:35.742386Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1057,"took":"4.755783ms","hash":3949895335,"current-db-size-bytes":2711552,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1622016,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-10T19:39:35.742528Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3949895335,"revision":1057,"compact-revision":814}
	
	
	==> kernel <==
	 19:43:59 up 20 min,  0 users,  load average: 0.83, 0.34, 0.22
	Linux no-preload-320324 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] <==
	W1010 19:39:38.099553       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:39:38.099661       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:39:38.100803       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:39:38.100854       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:40:38.101758       1 handler_proxy.go:99] no RequestInfo found in the context
	W1010 19:40:38.101807       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:40:38.101993       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1010 19:40:38.102108       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:40:38.103318       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:40:38.103371       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1010 19:42:38.103812       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:42:38.103956       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1010 19:42:38.103812       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 19:42:38.104072       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1010 19:42:38.105235       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1010 19:42:38.105297       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] <==
	E1010 19:38:40.847812       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:38:41.328709       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:39:10.854987       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:39:11.337272       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:39:40.862775       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:39:41.344614       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:40:10.869999       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:40:11.356915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:40:26.440167       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-320324"
	I1010 19:40:36.653873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="287.67µs"
	E1010 19:40:40.876637       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:40:41.364615       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1010 19:40:51.652951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="282.758µs"
	E1010 19:41:10.882311       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:41:11.373565       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:41:40.889257       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:41:41.383314       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:42:10.896881       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:42:11.391631       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:42:40.905015       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:42:41.398956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:43:10.911179       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:43:11.407215       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1010 19:43:40.919021       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 19:43:41.418225       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1010 19:24:38.411824       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1010 19:24:38.421685       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.11"]
	E1010 19:24:38.421766       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 19:24:38.458130       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1010 19:24:38.458178       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1010 19:24:38.458201       1 server_linux.go:169] "Using iptables Proxier"
	I1010 19:24:38.461131       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 19:24:38.461391       1 server.go:483] "Version info" version="v1.31.1"
	I1010 19:24:38.461570       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 19:24:38.463182       1 config.go:199] "Starting service config controller"
	I1010 19:24:38.463227       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1010 19:24:38.463264       1 config.go:105] "Starting endpoint slice config controller"
	I1010 19:24:38.463285       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1010 19:24:38.463832       1 config.go:328] "Starting node config controller"
	I1010 19:24:38.463865       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1010 19:24:38.564462       1 shared_informer.go:320] Caches are synced for node config
	I1010 19:24:38.564516       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1010 19:24:38.564485       1 shared_informer.go:320] Caches are synced for service config
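Note: the "Error cleaning up nftables rules ... Operation not supported" entries at the top of this log only concern kube-proxy's startup cleanup of stale nftables state; the lines that follow show it falling back to the iptables proxier in single-stack IPv4 mode and syncing all of its caches. To double-check that from inside the guest (this assumes the nft and iptables binaries are present in the minikube VM image), one could run:

	minikube -p no-preload-320324 ssh "sudo nft list tables"
	minikube -p no-preload-320324 ssh "sudo iptables-save | grep -c KUBE-"

The first command failing the same way (or listing no kube-proxy tables) together with a non-zero count of KUBE-* chains from the second would match what the log reports.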
	
	
	==> kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] <==
	I1010 19:24:34.563688       1 serving.go:386] Generated self-signed cert in-memory
	W1010 19:24:37.065842       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 19:24:37.065952       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 19:24:37.065992       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 19:24:37.066018       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 19:24:37.092595       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1010 19:24:37.092823       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 19:24:37.094920       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 19:24:37.095061       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1010 19:24:37.095304       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1010 19:24:37.095385       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 19:24:37.196478       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
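Note: the extension-apiserver-authentication warnings are non-fatal here; the scheduler explicitly continues without that configuration and serves on 127.0.0.1:10259. If one wanted to silence them, the remediation the log itself suggests amounts to a rolebinding for the scheduler user (the binding name below is illustrative):

	kubectl -n kube-system create rolebinding kube-scheduler-authn-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler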
	
	
	==> kubelet <==
	Oct 10 19:42:47 no-preload-320324 kubelet[1381]: E1010 19:42:47.635874    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
	Oct 10 19:42:52 no-preload-320324 kubelet[1381]: E1010 19:42:52.965775    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589372965157246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:42:52 no-preload-320324 kubelet[1381]: E1010 19:42:52.965829    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589372965157246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:01 no-preload-320324 kubelet[1381]: E1010 19:43:01.636325    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
	Oct 10 19:43:02 no-preload-320324 kubelet[1381]: E1010 19:43:02.967981    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589382967565514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:02 no-preload-320324 kubelet[1381]: E1010 19:43:02.968384    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589382967565514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:12 no-preload-320324 kubelet[1381]: E1010 19:43:12.638236    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
	Oct 10 19:43:12 no-preload-320324 kubelet[1381]: E1010 19:43:12.970784    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589392970258325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:12 no-preload-320324 kubelet[1381]: E1010 19:43:12.970926    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589392970258325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:22 no-preload-320324 kubelet[1381]: E1010 19:43:22.973093    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589402972760491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:22 no-preload-320324 kubelet[1381]: E1010 19:43:22.973179    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589402972760491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:27 no-preload-320324 kubelet[1381]: E1010 19:43:27.636454    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
	Oct 10 19:43:32 no-preload-320324 kubelet[1381]: E1010 19:43:32.678036    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 10 19:43:32 no-preload-320324 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 10 19:43:32 no-preload-320324 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 10 19:43:32 no-preload-320324 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 10 19:43:32 no-preload-320324 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 10 19:43:32 no-preload-320324 kubelet[1381]: E1010 19:43:32.975131    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589412974863362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:32 no-preload-320324 kubelet[1381]: E1010 19:43:32.975176    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589412974863362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:40 no-preload-320324 kubelet[1381]: E1010 19:43:40.638943    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
	Oct 10 19:43:42 no-preload-320324 kubelet[1381]: E1010 19:43:42.976435    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589422975941062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:42 no-preload-320324 kubelet[1381]: E1010 19:43:42.976974    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589422975941062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:52 no-preload-320324 kubelet[1381]: E1010 19:43:52.979141    1381 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589432978808826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:52 no-preload-320324 kubelet[1381]: E1010 19:43:52.979194    1381 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589432978808826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 10 19:43:54 no-preload-320324 kubelet[1381]: E1010 19:43:54.635195    1381 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8w9lk" podUID="354939e6-2ca9-44f5-8e8e-c10493c68b79"
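Note: two failure patterns repeat through this kubelet log. First, metrics-server stays in ImagePullBackOff because its image points at fake.domain/registry.k8s.io/echoserver:1.4, which looks like a deliberately unreachable registry, so the pull can never succeed and the metrics.k8s.io discovery errors in the controller-manager log above follow from that. Second, the eviction manager fails to synchronize because the CRI ImageFsInfo response only carries UsedBytes and InodesUsed, not the stats it needs. Assuming the profile's kubectl context, that the pod named in the log still exists when checked, and that the deployment keeps the metrics-server name implied by the ReplicaSet hash in the pod name, the pull failure can be inspected with:

	kubectl --context no-preload-320324 -n kube-system describe pod metrics-server-6867b74b74-8w9lk
	kubectl --context no-preload-320324 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'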
	
	
	==> storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] <==
	I1010 19:25:08.969389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 19:25:08.982242       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 19:25:08.982376       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 19:25:26.394136       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 19:25:26.394650       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a148e308-6b33-4484-b27c-21c54d403579", APIVersion:"v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-320324_1eb5e73e-87ab-4f71-aa5d-dc947bcff248 became leader
	I1010 19:25:26.394713       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-320324_1eb5e73e-87ab-4f71-aa5d-dc947bcff248!
	I1010 19:25:26.495632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-320324_1eb5e73e-87ab-4f71-aa5d-dc947bcff248!
	
	
	==> storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] <==
	I1010 19:24:38.345339       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1010 19:25:08.350627       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
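Note on the two storage-provisioner entries in the dump above: the first container (e14d37c6…) exited after it could not reach the kubernetes Service VIP at 10.96.0.1:443 within its 32 s client timeout, and the replacement container (dfabbf70…) came up at 19:25:08 and acquired the kube-system/k8s.io-minikube-hostpath lease at 19:25:26, so provisioning recovered on its own. Assuming the minikube provisioner keeps its usual pod name, the earlier crash could be pulled up directly with:

	kubectl --context no-preload-320324 -n kube-system logs storage-provisioner --previous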
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-320324 -n no-preload-320324
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-320324 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-8w9lk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-320324 describe pod metrics-server-6867b74b74-8w9lk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-320324 describe pod metrics-server-6867b74b74-8w9lk: exit status 1 (69.794626ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-8w9lk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-320324 describe pod metrics-server-6867b74b74-8w9lk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (351.36s)
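The post-mortem above is a small race rather than a contradiction: helpers_test.go first lists non-running pods with --field-selector=status.phase!=Running and records metrics-server-6867b74b74-8w9lk, but by the time it runs describe on that name the pod no longer exists, so kubectl returns NotFound with exit status 1. Re-checking by label instead of by the recorded pod name, assuming the usual metrics-server labelling, sidesteps that window:

	kubectl --context no-preload-320324 -n kube-system get pods -l k8s-app=metrics-server -o wide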

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (172.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:40:53.095796   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:41:23.923143   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:41:58.553070   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:42:04.753640   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:42:50.018146   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:43:11.750978   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
E1010 19:43:28.350173   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.112:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.112:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947203 -n old-k8s-version-947203
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 2 (249.58052ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-947203" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-947203 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-947203 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.526µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-947203 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
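The warnings above come from the harness repeatedly listing pods by label selector while the stopped apiserver refuses connections, until the 9m0s deadline expires. As a rough illustration only (not minikube's actual helper code; the kubeconfig path and the 5-second poll interval are assumptions), a Go poll loop of the same shape could look like this:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls the cluster until a pod matching selector in ns is
// Running, logging (and tolerating) transient list errors such as
// "connection refused" while the control plane is down.
func waitForRunningPod(kubeconfig, ns, selector string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollUntilContextTimeout(context.Background(), 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// Keep polling on transient apiserver errors; the deadline bounds the wait.
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == "Running" {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Placeholder kubeconfig path; namespace and selector match the test above.
	err := waitForRunningPod("/path/to/kubeconfig", "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
	fmt.Println("wait result:", err)
}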
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 2 (232.500102ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
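For context, "exit status 2 (may be ok)" reflects that minikube status uses non-zero exit codes to encode stopped or degraded components rather than a hard command failure. A minimal, hypothetical sketch (not the real helpers_test.go code; the binary path and profile name are taken from the log lines above) of running the same status query and treating that exit code as informational:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs `minikube status --format={{.Host}}` for a profile and
// reports, but does not fail on, the non-zero exit codes minikube uses to
// signal stopped components.
func hostState(minikubeBin, profile string) (string, error) {
	cmd := exec.Command(minikubeBin, "status", "--format={{.Host}}", "-p", profile)
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit here encodes component state (e.g. a stopped apiserver),
		// so keep the parsed state instead of aborting the test.
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
		return state, nil
	}
	return state, err
}

func main() {
	state, err := hostState("out/minikube-linux-amd64", "old-k8s-version-947203")
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host:", state)
}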
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-947203 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-947203 logs -n 25: (1.623708714s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-029826             | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-029826                  | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-029826 --memory=2200 --alsologtostderr   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-541370            | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-029826 image list                           | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:16 UTC |
	| delete  | -p newest-cni-029826                                   | newest-cni-029826            | jenkins | v1.34.0 | 10 Oct 24 19:16 UTC | 10 Oct 24 19:17 UTC |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:17 UTC | 10 Oct 24 19:18 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-320324                  | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-320324                                   | no-preload-320324            | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-947203        | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-361847  | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC | 10 Oct 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-541370                 | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:18 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-541370                                  | embed-certs-541370           | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-947203             | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC | 10 Oct 24 19:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-947203                              | old-k8s-version-947203       | jenkins | v1.34.0 | 10 Oct 24 19:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-361847       | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-361847 | jenkins | v1.34.0 | 10 Oct 24 19:21 UTC | 10 Oct 24 19:29 UTC |
	|         | default-k8s-diff-port-361847                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 19:21:13
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 19:21:13.943219  148525 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:21:13.943336  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943343  148525 out.go:358] Setting ErrFile to fd 2...
	I1010 19:21:13.943347  148525 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:21:13.943560  148525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:21:13.944109  148525 out.go:352] Setting JSON to false
	I1010 19:21:13.945219  148525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":11020,"bootTime":1728577054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:21:13.945321  148525 start.go:139] virtualization: kvm guest
	I1010 19:21:13.947915  148525 out.go:177] * [default-k8s-diff-port-361847] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:21:13.950021  148525 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:21:13.950037  148525 notify.go:220] Checking for updates...
	I1010 19:21:13.952994  148525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:21:13.954661  148525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:21:13.956438  148525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:21:13.958502  148525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:21:13.960099  148525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:21:13.961930  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:21:13.962374  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.962450  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.978323  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41247
	I1010 19:21:13.978926  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.979520  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.979538  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.979954  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.980144  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:13.980446  148525 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:21:13.980745  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:21:13.980784  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:21:13.996046  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I1010 19:21:13.996534  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:21:13.997069  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:21:13.997097  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:21:13.997530  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:21:13.997788  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:21:14.033593  148525 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 19:21:14.035367  148525 start.go:297] selected driver: kvm2
	I1010 19:21:14.035394  148525 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.035526  148525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:21:14.036341  148525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.036452  148525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 19:21:14.052462  148525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 19:21:14.052918  148525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:21:14.052967  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:21:14.053019  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:21:14.053067  148525 start.go:340] cluster config:
	{Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:21:14.053178  148525 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 19:21:14.055485  148525 out.go:177] * Starting "default-k8s-diff-port-361847" primary control-plane node in "default-k8s-diff-port-361847" cluster
	I1010 19:21:16.773106  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:14.056945  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:21:14.057002  148525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 19:21:14.057014  148525 cache.go:56] Caching tarball of preloaded images
	I1010 19:21:14.057118  148525 preload.go:172] Found /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 19:21:14.057134  148525 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1010 19:21:14.057268  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:21:14.057476  148525 start.go:360] acquireMachinesLock for default-k8s-diff-port-361847: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:21:22.853158  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:25.925174  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:32.005160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:35.077198  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:41.157130  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:44.229127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:50.309136  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:53.381191  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:21:59.461129  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:02.533201  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:08.613124  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:11.685169  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:17.765161  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:20.837208  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:26.917127  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:29.989172  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:36.069147  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:39.141173  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:45.221160  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:48.293141  147213 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.11:22: connect: no route to host
	I1010 19:22:51.297376  147758 start.go:364] duration metric: took 3m49.312490934s to acquireMachinesLock for "embed-certs-541370"
	I1010 19:22:51.297453  147758 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:22:51.297464  147758 fix.go:54] fixHost starting: 
	I1010 19:22:51.297787  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:22:51.297848  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:22:51.314087  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1010 19:22:51.314588  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:22:51.315115  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:22:51.315138  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:22:51.315509  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:22:51.315691  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:22:51.315879  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:22:51.317597  147758 fix.go:112] recreateIfNeeded on embed-certs-541370: state=Stopped err=<nil>
	I1010 19:22:51.317621  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	W1010 19:22:51.317781  147758 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:22:51.319664  147758 out.go:177] * Restarting existing kvm2 VM for "embed-certs-541370" ...
	I1010 19:22:51.320967  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Start
	I1010 19:22:51.321134  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring networks are active...
	I1010 19:22:51.322026  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network default is active
	I1010 19:22:51.322468  147758 main.go:141] libmachine: (embed-certs-541370) Ensuring network mk-embed-certs-541370 is active
	I1010 19:22:51.322874  147758 main.go:141] libmachine: (embed-certs-541370) Getting domain xml...
	I1010 19:22:51.323687  147758 main.go:141] libmachine: (embed-certs-541370) Creating domain...
	I1010 19:22:51.294881  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:22:51.294927  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295226  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:22:51.295256  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:22:51.295454  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:22:51.297198  147213 machine.go:96] duration metric: took 4m37.414594306s to provisionDockerMachine
	I1010 19:22:51.297252  147213 fix.go:56] duration metric: took 4m37.436635356s for fixHost
	I1010 19:22:51.297259  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 4m37.436668423s
	W1010 19:22:51.297278  147213 start.go:714] error starting host: provision: host is not running
	W1010 19:22:51.297382  147213 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1010 19:22:51.297396  147213 start.go:729] Will try again in 5 seconds ...
	I1010 19:22:52.568699  147758 main.go:141] libmachine: (embed-certs-541370) Waiting to get IP...
	I1010 19:22:52.569582  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.569952  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.570018  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.569935  148914 retry.go:31] will retry after 261.244287ms: waiting for machine to come up
	I1010 19:22:52.832639  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:52.833280  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:52.833310  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:52.833200  148914 retry.go:31] will retry after 304.116732ms: waiting for machine to come up
	I1010 19:22:53.138770  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.139091  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.139124  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.139055  148914 retry.go:31] will retry after 484.354474ms: waiting for machine to come up
	I1010 19:22:53.624831  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:53.625293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:53.625323  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:53.625234  148914 retry.go:31] will retry after 591.916836ms: waiting for machine to come up
	I1010 19:22:54.219214  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.219732  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.219763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.219673  148914 retry.go:31] will retry after 614.162479ms: waiting for machine to come up
	I1010 19:22:54.835573  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:54.836038  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:54.836063  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:54.835988  148914 retry.go:31] will retry after 824.170953ms: waiting for machine to come up
	I1010 19:22:55.662092  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:55.662646  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:55.662668  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:55.662586  148914 retry.go:31] will retry after 928.483848ms: waiting for machine to come up
	I1010 19:22:56.593200  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:56.593724  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:56.593756  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:56.593679  148914 retry.go:31] will retry after 941.138644ms: waiting for machine to come up
	I1010 19:22:56.299351  147213 start.go:360] acquireMachinesLock for no-preload-320324: {Name:mkf50a8a3600473176c10a3ea212772e747151e2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1010 19:22:57.536977  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:57.537403  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:57.537429  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:57.537331  148914 retry.go:31] will retry after 1.262203584s: waiting for machine to come up
	I1010 19:22:58.801921  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:22:58.802420  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:22:58.802454  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:22:58.802381  148914 retry.go:31] will retry after 2.154751391s: waiting for machine to come up
	I1010 19:23:00.960100  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:00.960661  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:00.960684  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:00.960607  148914 retry.go:31] will retry after 1.945155171s: waiting for machine to come up
	I1010 19:23:02.907705  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:02.908097  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:02.908129  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:02.908038  148914 retry.go:31] will retry after 3.245262469s: waiting for machine to come up
	I1010 19:23:06.157527  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:06.157897  147758 main.go:141] libmachine: (embed-certs-541370) DBG | unable to find current IP address of domain embed-certs-541370 in network mk-embed-certs-541370
	I1010 19:23:06.157925  147758 main.go:141] libmachine: (embed-certs-541370) DBG | I1010 19:23:06.157858  148914 retry.go:31] will retry after 3.973579024s: waiting for machine to come up
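The repeated "will retry after ...: waiting for machine to come up" lines come from a backoff helper (retry.go) polling for the domain's DHCP lease. A minimal Go sketch of that poll-with-growing-jittered-delay pattern; lookupIP is a hypothetical stand-in for reading the lease, and the exact backoff factors are assumptions:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for reading the domain's DHCP lease; here it
// "finds" an address after a few polls so the example terminates.
var polls int

func lookupIP() (string, error) {
	polls++
	if polls < 4 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.120", nil
}

func main() {
	delay := 250 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the wait and add jitter, roughly matching the
		// increasing "will retry after ..." intervals in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}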
	I1010 19:23:11.321975  148123 start.go:364] duration metric: took 3m17.648521443s to acquireMachinesLock for "old-k8s-version-947203"
	I1010 19:23:11.322043  148123 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:11.322056  148123 fix.go:54] fixHost starting: 
	I1010 19:23:11.322537  148123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:11.322596  148123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:11.339586  148123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1010 19:23:11.340091  148123 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:11.340686  148123 main.go:141] libmachine: Using API Version  1
	I1010 19:23:11.340719  148123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:11.341101  148123 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:11.341295  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:11.341453  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetState
	I1010 19:23:11.343215  148123 fix.go:112] recreateIfNeeded on old-k8s-version-947203: state=Stopped err=<nil>
	I1010 19:23:11.343247  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	W1010 19:23:11.343408  148123 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:11.345653  148123 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-947203" ...
	I1010 19:23:10.135369  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has current primary IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.135830  147758 main.go:141] libmachine: (embed-certs-541370) Found IP for machine: 192.168.39.120
	I1010 19:23:10.135839  147758 main.go:141] libmachine: (embed-certs-541370) Reserving static IP address...
	I1010 19:23:10.136283  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.136311  147758 main.go:141] libmachine: (embed-certs-541370) Reserved static IP address: 192.168.39.120
	I1010 19:23:10.136327  147758 main.go:141] libmachine: (embed-certs-541370) DBG | skip adding static IP to network mk-embed-certs-541370 - found existing host DHCP lease matching {name: "embed-certs-541370", mac: "52:54:00:e2:ee:d0", ip: "192.168.39.120"}
	I1010 19:23:10.136339  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Getting to WaitForSSH function...
	I1010 19:23:10.136351  147758 main.go:141] libmachine: (embed-certs-541370) Waiting for SSH to be available...
	I1010 19:23:10.138861  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139259  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.139293  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.139438  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH client type: external
	I1010 19:23:10.139472  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa (-rw-------)
	I1010 19:23:10.139517  147758 main.go:141] libmachine: (embed-certs-541370) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.120 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:10.139541  147758 main.go:141] libmachine: (embed-certs-541370) DBG | About to run SSH command:
	I1010 19:23:10.139562  147758 main.go:141] libmachine: (embed-certs-541370) DBG | exit 0
	I1010 19:23:10.261078  147758 main.go:141] libmachine: (embed-certs-541370) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:10.261533  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetConfigRaw
	I1010 19:23:10.262192  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.265071  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265467  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.265515  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.265737  147758 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/config.json ...
	I1010 19:23:10.265941  147758 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:10.265960  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:10.266188  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.269186  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269618  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.269649  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.269799  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.269984  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270206  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.270345  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.270550  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.270834  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.270849  147758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:10.373285  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:10.373316  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373625  147758 buildroot.go:166] provisioning hostname "embed-certs-541370"
	I1010 19:23:10.373660  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.373835  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.376552  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.376951  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.376994  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.377132  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.377332  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377489  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.377606  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.377745  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.377918  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.377930  147758 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-541370 && echo "embed-certs-541370" | sudo tee /etc/hostname
	I1010 19:23:10.495847  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-541370
	
	I1010 19:23:10.495880  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.498868  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499205  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.499247  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.499362  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.499556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499700  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.499829  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.499961  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.500187  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.500210  147758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-541370' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-541370/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-541370' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:10.614318  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
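The SSH command above (and its <nil> result) maps 127.0.1.1 to the new hostname in the guest's /etc/hosts. A minimal Go sketch of composing that same shell snippet before handing it to an SSH runner; the runner itself is out of scope here:

package main

import "fmt"

// hostsCommand builds a shell snippet like the one in the log that maps
// 127.0.1.1 to the machine's hostname; sending it over SSH is omitted.
func hostsCommand(hostname string) string {
	return fmt.Sprintf(`
	if ! grep -xq '.*\s%s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
		else
			echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsCommand("embed-certs-541370"))
}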
	I1010 19:23:10.614357  147758 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:10.614412  147758 buildroot.go:174] setting up certificates
	I1010 19:23:10.614429  147758 provision.go:84] configureAuth start
	I1010 19:23:10.614457  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetMachineName
	I1010 19:23:10.614763  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:10.617457  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.617888  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.617916  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.618078  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.620243  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620635  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.620666  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.620789  147758 provision.go:143] copyHostCerts
	I1010 19:23:10.620895  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:10.620913  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:10.620998  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:10.621111  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:10.621123  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:10.621159  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:10.621245  147758 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:10.621257  147758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:10.621292  147758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:10.621364  147758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.embed-certs-541370 san=[127.0.0.1 192.168.39.120 embed-certs-541370 localhost minikube]
	I1010 19:23:10.697456  147758 provision.go:177] copyRemoteCerts
	I1010 19:23:10.697515  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:10.697547  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.700439  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700763  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.700799  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.700956  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.701162  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.701320  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.701465  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:10.783442  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:10.808446  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 19:23:10.832117  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:23:10.856286  147758 provision.go:87] duration metric: took 241.840139ms to configureAuth
	I1010 19:23:10.856318  147758 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:10.856528  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:10.856640  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:10.859252  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859677  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:10.859708  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:10.859916  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:10.860087  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860222  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:10.860344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:10.860524  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:10.860688  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:10.860702  147758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:11.086349  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:11.086375  147758 machine.go:96] duration metric: took 820.421344ms to provisionDockerMachine
	I1010 19:23:11.086386  147758 start.go:293] postStartSetup for "embed-certs-541370" (driver="kvm2")
	I1010 19:23:11.086401  147758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:11.086423  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.086755  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:11.086783  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.089482  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.089838  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.089860  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.090042  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.090253  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.090410  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.090535  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.172474  147758 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:11.176699  147758 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:11.176733  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:11.176800  147758 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:11.176899  147758 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:11.177044  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:11.186985  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:11.211385  147758 start.go:296] duration metric: took 124.982089ms for postStartSetup
	I1010 19:23:11.211442  147758 fix.go:56] duration metric: took 19.913977793s for fixHost
	I1010 19:23:11.211472  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.214421  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214780  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.214812  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.214999  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.215219  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215429  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.215612  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.215786  147758 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:11.215974  147758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.120 22 <nil> <nil>}
	I1010 19:23:11.215985  147758 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:11.321786  147758 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588191.295446348
	
	I1010 19:23:11.321814  147758 fix.go:216] guest clock: 1728588191.295446348
	I1010 19:23:11.321822  147758 fix.go:229] Guest: 2024-10-10 19:23:11.295446348 +0000 UTC Remote: 2024-10-10 19:23:11.211447413 +0000 UTC m=+249.373680838 (delta=83.998935ms)
	I1010 19:23:11.321870  147758 fix.go:200] guest clock delta is within tolerance: 83.998935ms
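The fix step runs `date +%s.%N` on the guest and compares it with the host clock, accepting the drift when it is inside a tolerance. A small Go sketch of that comparison using the guest and host timestamps from the log above; the 2s tolerance is an assumption for the sketch, not necessarily minikube's threshold:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and subtracts the
// host's wall-clock time. float64 parsing rounds the nanoseconds a
// little, so the result is approximate (~84ms for the logged values).
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log: guest epoch output and host "Remote" time.
	host := time.Date(2024, 10, 10, 19, 23, 11, 211447413, time.UTC)
	delta, err := clockDelta("1728588191.295446348", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance for this sketch
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() <= tolerance)
}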
	I1010 19:23:11.321877  147758 start.go:83] releasing machines lock for "embed-certs-541370", held for 20.024455781s
	I1010 19:23:11.321905  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.322169  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:11.325004  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325350  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.325375  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.325566  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326090  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326294  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:23:11.326383  147758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:11.326444  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.326501  147758 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:11.326529  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:23:11.329311  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329657  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.329690  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329713  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.329866  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330057  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330160  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:11.330188  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:11.330204  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330344  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:23:11.330346  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.330538  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:23:11.330687  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:23:11.330821  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:23:11.406525  147758 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:11.428958  147758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:11.577663  147758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:11.584024  147758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:11.584112  147758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:11.603163  147758 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:11.603190  147758 start.go:495] detecting cgroup driver to use...
	I1010 19:23:11.603291  147758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:11.624744  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:11.645477  147758 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:11.645537  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:11.660216  147758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:11.675019  147758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:11.796038  147758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:11.967750  147758 docker.go:233] disabling docker service ...
	I1010 19:23:11.967828  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:11.983184  147758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:12.001603  147758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:12.149408  147758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:12.306724  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:12.324302  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:12.345426  147758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:12.345508  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.357812  147758 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:12.357883  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.370095  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.382389  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.395000  147758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:12.408429  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.426851  147758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:12.450568  147758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
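The sed commands above rewrite the pause image and cgroup manager keys in /etc/crio/crio.conf.d/02-crio.conf. A minimal Go sketch doing the same substitutions on an in-memory copy of such a fragment (the example file contents are invented for illustration):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A made-up fragment standing in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`
	// Same effect as the logged `sed -i 's|^.*pause_image = .*$|...|'` calls.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}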
	I1010 19:23:12.463434  147758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:12.474537  147758 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:12.474606  147758 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:12.489074  147758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
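Before restarting CRI-O, the runtime prep checks the bridge netfilter sysctl, falls back to loading br_netfilter when the key is missing, and enables IPv4 forwarding. A minimal Go sketch mirroring the three logged commands; it needs root to actually take effect and is only illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the logged sequence: probe the sysctl key,
// load br_netfilter if it is missing, then turn on IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward (requires root).
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}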
	I1010 19:23:12.500048  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:12.635695  147758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:12.733511  147758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:12.733593  147758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:12.739072  147758 start.go:563] Will wait 60s for crictl version
	I1010 19:23:12.739138  147758 ssh_runner.go:195] Run: which crictl
	I1010 19:23:12.743675  147758 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:12.792272  147758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:12.792379  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.829968  147758 ssh_runner.go:195] Run: crio --version
	I1010 19:23:12.862579  147758 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:11.347158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .Start
	I1010 19:23:11.347359  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring networks are active...
	I1010 19:23:11.348252  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network default is active
	I1010 19:23:11.348689  148123 main.go:141] libmachine: (old-k8s-version-947203) Ensuring network mk-old-k8s-version-947203 is active
	I1010 19:23:11.349139  148123 main.go:141] libmachine: (old-k8s-version-947203) Getting domain xml...
	I1010 19:23:11.349799  148123 main.go:141] libmachine: (old-k8s-version-947203) Creating domain...
	I1010 19:23:12.671472  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting to get IP...
	I1010 19:23:12.672628  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.673145  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.673278  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.673132  149057 retry.go:31] will retry after 280.111667ms: waiting for machine to come up
	I1010 19:23:12.954865  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:12.955325  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:12.955377  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:12.955300  149057 retry.go:31] will retry after 289.967238ms: waiting for machine to come up
	I1010 19:23:13.247039  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.247663  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.247769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.247632  149057 retry.go:31] will retry after 319.085935ms: waiting for machine to come up
	I1010 19:23:12.863797  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetIP
	I1010 19:23:12.867335  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.867760  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:23:12.867794  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:23:12.868029  147758 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:12.872503  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:12.887684  147758 kubeadm.go:883] updating cluster {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:12.887809  147758 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:12.887853  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:12.924155  147758 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:12.924240  147758 ssh_runner.go:195] Run: which lz4
	I1010 19:23:12.928613  147758 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:12.933024  147758 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:12.933069  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:14.450790  147758 crio.go:462] duration metric: took 1.522223644s to copy over tarball
	I1010 19:23:14.450893  147758 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:16.642155  147758 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.191220673s)
	I1010 19:23:16.642193  147758 crio.go:469] duration metric: took 2.191371146s to extract the tarball
	I1010 19:23:16.642202  147758 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:16.679611  147758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:16.723840  147758 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:16.723865  147758 cache_images.go:84] Images are preloaded, skipping loading
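Both before extraction ("couldn't find preloaded image ... assuming images are not preloaded") and after it ("all images are preloaded"), the decision is made by parsing `sudo crictl images --output json`. A small Go sketch of checking that output for a given tag; the `images`/`repoTags` field names follow the CRI image-list JSON and are an assumption here:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList models only as much of `crictl images --output json` as the
// check needs; field names are assumed from the CRI list-images format.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(raw []byte, tag string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"]}]}`)
	found, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(found, err)
}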
	I1010 19:23:16.723874  147758 kubeadm.go:934] updating node { 192.168.39.120 8443 v1.31.1 crio true true} ...
	I1010 19:23:16.723998  147758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-541370 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:16.724081  147758 ssh_runner.go:195] Run: crio config
	I1010 19:23:16.779659  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:16.779682  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:16.779693  147758 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:16.779714  147758 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-541370 NodeName:embed-certs-541370 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:23:16.779842  147758 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-541370"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.120
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.120"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:16.779904  147758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:16.791424  147758 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:16.791493  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:16.801715  147758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1010 19:23:16.821364  147758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:16.842703  147758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
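The kubeadm config dumped earlier is written to the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch of rendering just its InitConfiguration head from the per-profile values (node name, IP, API server port); the template text is illustrative and not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// initCfgTmpl covers only the InitConfiguration portion of the config
// shown above; the per-profile values are substituted at render time.
const initCfgTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	data := struct {
		NodeName, NodeIP string
		APIServerPort    int
	}{NodeName: "embed-certs-541370", NodeIP: "192.168.39.120", APIServerPort: 8443}

	t := template.Must(template.New("kubeadm").Parse(initCfgTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}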
	I1010 19:23:16.864835  147758 ssh_runner.go:195] Run: grep 192.168.39.120	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:16.868928  147758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.120	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:13.568192  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:13.568690  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:13.568721  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:13.568657  149057 retry.go:31] will retry after 575.032546ms: waiting for machine to come up
	I1010 19:23:14.145650  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.146293  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.146319  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.146234  149057 retry.go:31] will retry after 702.803794ms: waiting for machine to come up
	I1010 19:23:14.851201  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:14.851736  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:14.851769  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:14.851706  149057 retry.go:31] will retry after 883.195401ms: waiting for machine to come up
	I1010 19:23:15.736627  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:15.737199  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:15.737252  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:15.737155  149057 retry.go:31] will retry after 794.699815ms: waiting for machine to come up
	I1010 19:23:16.533952  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:16.534510  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:16.534545  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:16.534443  149057 retry.go:31] will retry after 961.751912ms: waiting for machine to come up
	I1010 19:23:17.497535  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:17.498007  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:17.498035  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:17.497936  149057 retry.go:31] will retry after 1.423503471s: waiting for machine to come up
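
The kvm2 driver cannot continue until libvirt's DHCP server hands the old-k8s-version-947203 domain an address, so it keeps re-querying the lease table and retrying with growing, jittered delays (575ms, 702ms, 883ms, ... above). The sketch below only mirrors that retry shape; the lookupIP probe, attempt limit and growth factor are assumptions, not minikube's retry.go.

    // Retry-with-backoff shape of the "waiting for machine to come up" loop; sketch only.
    package main

    import (
        "errors"
        "fmt"
        "log"
        "math/rand"
        "time"
    )

    // lookupIP stands in for the driver's DHCP-lease query; it is a placeholder.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address of domain")
        }
        return "192.168.61.112", nil
    }

    func main() {
        delay := 500 * time.Millisecond
        for attempt := 1; attempt <= 20; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("machine is up at", ip)
                return
            }
            // Jitter and grow the delay, roughly matching the
            // "will retry after ..." messages in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            log.Printf("%v: will retry after %v", err, sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        log.Fatal("machine never came up")
    }
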
	I1010 19:23:16.883162  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:17.027646  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:17.045083  147758 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370 for IP: 192.168.39.120
	I1010 19:23:17.045108  147758 certs.go:194] generating shared ca certs ...
	I1010 19:23:17.045130  147758 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:17.045491  147758 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:17.045561  147758 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:17.045579  147758 certs.go:256] generating profile certs ...
	I1010 19:23:17.045730  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/client.key
	I1010 19:23:17.045814  147758 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key.dd7630a8
	I1010 19:23:17.045874  147758 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key
	I1010 19:23:17.046015  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:17.046055  147758 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:17.046075  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:17.046114  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:17.046150  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:17.046177  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:17.046235  147758 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:17.047131  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:17.087057  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:17.137707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:17.181707  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:17.213227  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 19:23:17.247846  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:17.275989  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:17.301144  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/embed-certs-541370/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 19:23:17.326232  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:17.350586  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:17.374666  147758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:17.399570  147758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:17.417846  147758 ssh_runner.go:195] Run: openssl version
	I1010 19:23:17.424206  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:17.436091  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441020  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.441090  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:17.447318  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:17.459191  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:17.470878  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476185  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.476248  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:17.482808  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:17.494626  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:17.506522  147758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511484  147758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.511558  147758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:17.517445  147758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
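
The test/ln pairs above install the CA and client certificates into the guest's trust store: each file under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout`, and a symlink named `<subject-hash>.0` is created in /etc/ssl/certs, which is the lookup layout OpenSSL expects. A small sketch of the same step follows; the paths come from the log, and it shells out to openssl rather than recomputing the subject hash (requires root, illustrative only).

    // Link a CA certificate into /etc/ssl/certs under its OpenSSL subject hash.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace an existing link, like `ln -fs`
        if err := os.Symlink(cert, link); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked", cert, "->", link)
    }
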
	I1010 19:23:17.529109  147758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:17.534139  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:17.540846  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:17.547429  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:17.554350  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:17.561036  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:17.567571  147758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
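
Each `openssl x509 -noout -in <cert> -checkend 86400` above asks whether the certificate will still be valid 24 hours (86400 seconds) from now; a non-zero exit would force the certificate to be regenerated before restart. An equivalent check done natively with crypto/x509 is sketched below; the file list is copied from the log and the helper name is made up.

    // Native equivalent of `openssl x509 -noout -checkend 86400`; sketch only.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM data", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            if err != nil {
                log.Printf("%s: %v", p, err)
                continue
            }
            fmt.Printf("%s expires within 24h: %v\n", p, soon)
        }
    }
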
	I1010 19:23:17.574019  147758 kubeadm.go:392] StartCluster: {Name:embed-certs-541370 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-541370 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:17.574128  147758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:17.574187  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.612699  147758 cri.go:89] found id: ""
	I1010 19:23:17.612804  147758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:17.623827  147758 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:17.623856  147758 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:17.623917  147758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:17.634732  147758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:17.635754  147758 kubeconfig.go:125] found "embed-certs-541370" server: "https://192.168.39.120:8443"
	I1010 19:23:17.637813  147758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:17.648543  147758 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.120
	I1010 19:23:17.648590  147758 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:17.648606  147758 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:17.648671  147758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:17.693966  147758 cri.go:89] found id: ""
	I1010 19:23:17.694057  147758 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:17.715977  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:17.727871  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:17.727891  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:17.727942  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:17.738274  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:17.738340  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:17.748925  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:17.758945  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:17.759008  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:17.769169  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.779196  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:17.779282  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:17.790948  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:17.802264  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:17.802332  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:17.814009  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:17.826820  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:17.947270  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.128720  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.181409785s)
	I1010 19:23:19.128770  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.343735  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:19.419728  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
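
Because existing configuration files were found, the restart path does not run a full `kubeadm init`; it replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied /var/tmp/minikube/kubeadm.yaml. The sketch below drives the same phase sequence; the binary path, phase list and config path are taken from the log, and error handling is simplified.

    // Replay individual kubeadm init phases, as in the restart path above; sketch only.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubeadm", args...)
            cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.31.1:"+os.Getenv("PATH"))
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatalf("kubeadm %v failed: %v", p, err)
            }
            fmt.Println("completed:", p)
        }
    }
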
	I1010 19:23:19.529802  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:19.529930  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.030019  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.530833  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:20.558314  147758 api_server.go:72] duration metric: took 1.028510044s to wait for apiserver process to appear ...
	I1010 19:23:20.558350  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:23:20.558375  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:20.558991  147758 api_server.go:269] stopped: https://192.168.39.120:8443/healthz: Get "https://192.168.39.120:8443/healthz": dial tcp 192.168.39.120:8443: connect: connection refused
	I1010 19:23:21.058727  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:18.923057  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:18.923636  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:18.923671  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:18.923582  149057 retry.go:31] will retry after 2.09836426s: waiting for machine to come up
	I1010 19:23:21.023500  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:21.024054  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:21.024084  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:21.023999  149057 retry.go:31] will retry after 2.809962093s: waiting for machine to come up
	I1010 19:23:23.187135  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:23:23.187187  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:23:23.187203  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.233367  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.233414  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:23.558658  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:23.575108  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:23.575139  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.058679  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.065735  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:23:24.065763  147758 api_server.go:103] status: https://192.168.39.120:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:23:24.559440  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:23:24.565460  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:23:24.571828  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:23:24.571859  147758 api_server.go:131] duration metric: took 4.013501806s to wait for apiserver health ...
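
The healthz poll above treats anything other than 200 as "not healthy yet": the first probe is refused while the apiserver is still starting, the next returns 403 because anonymous access is not yet wired up, and the 500 responses enumerate the poststarthooks (rbac/bootstrap-roles, bootstrap-controller, apiservice registration, ...) that have not finished. About four seconds in, /healthz returns 200 and the wait ends. A minimal polling sketch follows; the InsecureSkipVerify transport is an illustration-only shortcut, since minikube trusts the cluster CA instead.

    // Poll the apiserver /healthz endpoint until it returns 200; sketch only.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
            },
        }
        url := "https://192.168.39.120:8443/healthz" // address from the log
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                log.Printf("not reachable yet: %v", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
                // 403 means anonymous access is still blocked; 500 bodies list
                // the poststarthooks that have not completed yet.
                log.Printf("healthz returned %d", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("apiserver did not become healthy before the deadline")
    }
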
	I1010 19:23:24.571869  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:23:24.571875  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:24.573875  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:23:24.575458  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:23:24.586870  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
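
With the kvm2 driver, the crio runtime and no CNI selected, minikube falls back to the kernel bridge plugin and writes a conflist into /etc/cni/net.d. The log only shows the destination and size (496 bytes), so the JSON in the sketch below is a representative bridge+portmap configuration using the podSubnet from the kubeadm config above, not necessarily the exact bytes minikube installs.

    // Write a representative bridge CNI conflist; contents are an assumption.
    package main

    import (
        "log"
        "os"
    )

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            log.Fatal(err)
        }
    }
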
	I1010 19:23:24.624362  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:23:24.643465  147758 system_pods.go:59] 8 kube-system pods found
	I1010 19:23:24.643516  147758 system_pods.go:61] "coredns-7c65d6cfc9-fgtkg" [df696e79-ca6f-4d73-a57e-9c6cdc93c505] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:23:24.643532  147758 system_pods.go:61] "etcd-embed-certs-541370" [254fa12c-b0d2-499f-8dd9-c1505efeaaab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:23:24.643543  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [fcd3809d-d325-4481-8e86-c246e29458fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:23:24.643565  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ab0fdd6b-d9b7-48dc-b82f-29b21d2295ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:23:24.643584  147758 system_pods.go:61] "kube-proxy-f5l6x" [446383fa-44c5-4b9e-bfc5-e38799597e75] Running
	I1010 19:23:24.643592  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [1c6af7e7-ce16-4ae2-8feb-e5d474173de1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:23:24.643603  147758 system_pods.go:61] "metrics-server-6867b74b74-kw529" [aad00321-d499-4563-849e-286d6e699fc8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:23:24.643611  147758 system_pods.go:61] "storage-provisioner" [df4ae621-5066-4276-9276-a0538a9f9dd1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:23:24.643620  147758 system_pods.go:74] duration metric: took 19.234558ms to wait for pod list to return data ...
	I1010 19:23:24.643637  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:23:24.651647  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:23:24.651683  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:23:24.651699  147758 node_conditions.go:105] duration metric: took 8.056629ms to run NodePressure ...
	I1010 19:23:24.651720  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:24.915651  147758 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921104  147758 kubeadm.go:739] kubelet initialised
	I1010 19:23:24.921131  147758 kubeadm.go:740] duration metric: took 5.44643ms waiting for restarted kubelet to initialise ...
	I1010 19:23:24.921142  147758 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:23:24.927535  147758 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:23.837827  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:23.838271  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:23.838295  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:23.838220  149057 retry.go:31] will retry after 2.562944835s: waiting for machine to come up
	I1010 19:23:26.402699  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:26.403250  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | unable to find current IP address of domain old-k8s-version-947203 in network mk-old-k8s-version-947203
	I1010 19:23:26.403294  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | I1010 19:23:26.403192  149057 retry.go:31] will retry after 3.867656846s: waiting for machine to come up
	I1010 19:23:26.932764  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:28.936055  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:31.434959  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
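
The pod_ready wait gives every system-critical pod up to four minutes to report the Ready condition; here coredns-7c65d6cfc9-fgtkg keeps reporting Ready=False while its container comes back up after the restart. A sketch of the same wait using client-go follows; the kubeconfig path is an assumption (minikube reads the profile's own kubeconfig).

    // Wait for a pod's Ready condition using client-go; sketch only.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-fgtkg", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("pod did not become Ready in time")
    }
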
	I1010 19:23:31.893914  148525 start.go:364] duration metric: took 2m17.836396131s to acquireMachinesLock for "default-k8s-diff-port-361847"
	I1010 19:23:31.893993  148525 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:31.894007  148525 fix.go:54] fixHost starting: 
	I1010 19:23:31.894438  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:31.894502  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:31.914583  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37215
	I1010 19:23:31.915054  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:31.915535  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:23:31.915560  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:31.915967  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:31.916207  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:31.916387  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:23:31.918035  148525 fix.go:112] recreateIfNeeded on default-k8s-diff-port-361847: state=Stopped err=<nil>
	I1010 19:23:31.918073  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	W1010 19:23:31.918241  148525 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:31.920390  148525 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-361847" ...
	I1010 19:23:30.275222  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.275762  148123 main.go:141] libmachine: (old-k8s-version-947203) Found IP for machine: 192.168.61.112
	I1010 19:23:30.275779  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserving static IP address...
	I1010 19:23:30.275794  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has current primary IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.276478  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.276529  148123 main.go:141] libmachine: (old-k8s-version-947203) Reserved static IP address: 192.168.61.112
	I1010 19:23:30.276553  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | skip adding static IP to network mk-old-k8s-version-947203 - found existing host DHCP lease matching {name: "old-k8s-version-947203", mac: "52:54:00:b5:0b:b2", ip: "192.168.61.112"}
	I1010 19:23:30.276574  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Getting to WaitForSSH function...
	I1010 19:23:30.276590  148123 main.go:141] libmachine: (old-k8s-version-947203) Waiting for SSH to be available...
	I1010 19:23:30.278486  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278854  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.278890  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.278984  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH client type: external
	I1010 19:23:30.279016  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa (-rw-------)
	I1010 19:23:30.279051  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:30.279068  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | About to run SSH command:
	I1010 19:23:30.279080  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | exit 0
	I1010 19:23:30.405053  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | SSH cmd err, output: <nil>: 
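
WaitForSSH shells out to the external ssh client with host-key checking disabled and runs `exit 0` until the command succeeds, which signals that the guest is far enough along to be provisioned. A condensed sketch of that probe follows; the key path and address are copied from the log, and the attempt count is arbitrary.

    // Probe SSH reachability by running `exit 0` until it succeeds; sketch only.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", "/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa",
            "docker@192.168.61.112",
            "exit 0",
        }
        for attempt := 1; attempt <= 30; attempt++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("SSH never became available")
    }
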
	I1010 19:23:30.405492  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetConfigRaw
	I1010 19:23:30.406224  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.408830  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409178  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.409204  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.409462  148123 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/config.json ...
	I1010 19:23:30.409673  148123 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:30.409693  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:30.409916  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.412131  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412400  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.412427  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.412574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.412770  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.412965  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.413117  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.413368  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.413653  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.413668  148123 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:30.525149  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:30.525176  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525417  148123 buildroot.go:166] provisioning hostname "old-k8s-version-947203"
	I1010 19:23:30.525478  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.525703  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.528343  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528681  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.528708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.528841  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.529035  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529185  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.529303  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.529460  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.529635  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.529645  148123 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947203 && echo "old-k8s-version-947203" | sudo tee /etc/hostname
	I1010 19:23:30.655605  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947203
	
	I1010 19:23:30.655644  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.658439  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.658834  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.658869  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.659063  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:30.659285  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659458  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:30.659574  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:30.659733  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:30.659905  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:30.659921  148123 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947203/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:30.781974  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:30.782011  148123 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:30.782033  148123 buildroot.go:174] setting up certificates
	I1010 19:23:30.782048  148123 provision.go:84] configureAuth start
	I1010 19:23:30.782059  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetMachineName
	I1010 19:23:30.782379  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:30.785189  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785584  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.785613  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.785782  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:30.788086  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788432  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:30.788458  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:30.788582  148123 provision.go:143] copyHostCerts
	I1010 19:23:30.788645  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:30.788660  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:30.788729  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:30.788839  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:30.788867  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:30.788900  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:30.788977  148123 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:30.788986  148123 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:30.789013  148123 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:30.789081  148123 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947203 san=[127.0.0.1 192.168.61.112 localhost minikube old-k8s-version-947203]
	I1010 19:23:31.240626  148123 provision.go:177] copyRemoteCerts
	I1010 19:23:31.240698  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:31.240737  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.243678  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244108  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.244138  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.244356  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.244565  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.244765  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.244929  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.331337  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:31.356266  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1010 19:23:31.381186  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:31.405301  148123 provision.go:87] duration metric: took 623.235619ms to configureAuth
	I1010 19:23:31.405337  148123 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:31.405541  148123 config.go:182] Loaded profile config "old-k8s-version-947203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:23:31.405620  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.408111  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408444  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.408470  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.408695  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.408947  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409109  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.409229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.409374  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.409583  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.409609  148123 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:31.647023  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:31.647050  148123 machine.go:96] duration metric: took 1.237364012s to provisionDockerMachine
	I1010 19:23:31.647063  148123 start.go:293] postStartSetup for "old-k8s-version-947203" (driver="kvm2")
	I1010 19:23:31.647073  148123 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:31.647104  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.647464  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:31.647502  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.649991  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650288  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.650311  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.650484  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.650637  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.650832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.650968  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.735985  148123 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:31.740689  148123 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:31.740725  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:31.740803  148123 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:31.740947  148123 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:31.741057  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:31.751383  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:31.777091  148123 start.go:296] duration metric: took 130.011618ms for postStartSetup
	I1010 19:23:31.777134  148123 fix.go:56] duration metric: took 20.455078936s for fixHost
	I1010 19:23:31.777158  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.780083  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780661  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.780708  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.780832  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.781069  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781245  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.781382  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.781534  148123 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:31.781711  148123 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.112 22 <nil> <nil>}
	I1010 19:23:31.781721  148123 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:31.893727  148123 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588211.849777581
	
	I1010 19:23:31.893756  148123 fix.go:216] guest clock: 1728588211.849777581
	I1010 19:23:31.893763  148123 fix.go:229] Guest: 2024-10-10 19:23:31.849777581 +0000 UTC Remote: 2024-10-10 19:23:31.777138808 +0000 UTC m=+218.253674512 (delta=72.638773ms)
	I1010 19:23:31.893806  148123 fix.go:200] guest clock delta is within tolerance: 72.638773ms
	I1010 19:23:31.893813  148123 start.go:83] releasing machines lock for "old-k8s-version-947203", held for 20.571797307s
	I1010 19:23:31.893848  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.894151  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:31.896747  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897156  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.897207  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.897385  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898036  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898229  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .DriverName
	I1010 19:23:31.898375  148123 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:31.898422  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.898461  148123 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:31.898487  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHHostname
	I1010 19:23:31.901274  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901450  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901608  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901649  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901784  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.901920  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:31.901945  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:31.901952  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902112  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHPort
	I1010 19:23:31.902130  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902306  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHKeyPath
	I1010 19:23:31.902348  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:31.902412  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetSSHUsername
	I1010 19:23:31.902533  148123 sshutil.go:53] new ssh client: &{IP:192.168.61.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/old-k8s-version-947203/id_rsa Username:docker}
	I1010 19:23:32.016264  148123 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:32.022618  148123 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:32.169502  148123 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:32.175960  148123 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:32.176039  148123 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:32.193584  148123 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:32.193615  148123 start.go:495] detecting cgroup driver to use...
	I1010 19:23:32.193698  148123 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:32.210923  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:32.227172  148123 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:32.227254  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:32.242142  148123 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:32.256896  148123 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:32.387462  148123 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:32.575184  148123 docker.go:233] disabling docker service ...
	I1010 19:23:32.575257  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:32.590667  148123 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:32.604825  148123 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:32.739827  148123 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:32.863648  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:32.879435  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:32.899582  148123 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1010 19:23:32.899659  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.915010  148123 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:32.915081  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.931181  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.944038  148123 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:32.955182  148123 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:32.972220  148123 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:32.982681  148123 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:32.982739  148123 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:32.997012  148123 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:33.009468  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:33.180403  148123 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:33.284934  148123 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:33.285008  148123 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:33.290063  148123 start.go:563] Will wait 60s for crictl version
	I1010 19:23:33.290124  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:33.294752  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:33.342710  148123 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:33.342792  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.372821  148123 ssh_runner.go:195] Run: crio --version
	I1010 19:23:33.409395  148123 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1010 19:23:33.411012  148123 main.go:141] libmachine: (old-k8s-version-947203) Calling .GetIP
	I1010 19:23:33.414374  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.414789  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:0b:b2", ip: ""} in network mk-old-k8s-version-947203: {Iface:virbr3 ExpiryTime:2024-10-10 20:23:23 +0000 UTC Type:0 Mac:52:54:00:b5:0b:b2 Iaid: IPaddr:192.168.61.112 Prefix:24 Hostname:old-k8s-version-947203 Clientid:01:52:54:00:b5:0b:b2}
	I1010 19:23:33.414836  148123 main.go:141] libmachine: (old-k8s-version-947203) DBG | domain old-k8s-version-947203 has defined IP address 192.168.61.112 and MAC address 52:54:00:b5:0b:b2 in network mk-old-k8s-version-947203
	I1010 19:23:33.415084  148123 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:33.420164  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:33.433442  148123 kubeadm.go:883] updating cluster {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:33.433631  148123 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 19:23:33.433717  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:33.480013  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:33.480078  148123 ssh_runner.go:195] Run: which lz4
	I1010 19:23:33.485443  148123 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:33.490986  148123 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:33.491032  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1010 19:23:31.921836  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Start
	I1010 19:23:31.922036  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring networks are active...
	I1010 19:23:31.922890  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network default is active
	I1010 19:23:31.923271  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Ensuring network mk-default-k8s-diff-port-361847 is active
	I1010 19:23:31.923685  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Getting domain xml...
	I1010 19:23:31.924449  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Creating domain...
	I1010 19:23:33.241164  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting to get IP...
	I1010 19:23:33.242273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242713  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.242814  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.242702  149213 retry.go:31] will retry after 195.013046ms: waiting for machine to come up
	I1010 19:23:33.438965  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.439452  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.439379  149213 retry.go:31] will retry after 344.223823ms: waiting for machine to come up
	I1010 19:23:33.785167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785833  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:33.785864  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:33.785780  149213 retry.go:31] will retry after 342.787658ms: waiting for machine to come up
	I1010 19:23:33.435066  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:34.936768  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:34.936800  147758 pod_ready.go:82] duration metric: took 10.009235225s for pod "coredns-7c65d6cfc9-fgtkg" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:34.936814  147758 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944395  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.944430  147758 pod_ready.go:82] duration metric: took 1.007599746s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.944445  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953224  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:35.953255  147758 pod_ready.go:82] duration metric: took 8.801702ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.953266  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:35.227279  148123 crio.go:462] duration metric: took 1.74186828s to copy over tarball
	I1010 19:23:35.227383  148123 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:38.216499  148123 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.989082447s)
	I1010 19:23:38.216533  148123 crio.go:469] duration metric: took 2.989218587s to extract the tarball
	I1010 19:23:38.216541  148123 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:38.259699  148123 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:38.308284  148123 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1010 19:23:38.308313  148123 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:23:38.308422  148123 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1010 19:23:38.308477  148123 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.308482  148123 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.308593  148123 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.308617  148123 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.308405  148123 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.308446  148123 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.308530  148123 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310558  148123 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.310612  148123 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.310642  148123 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.310652  148123 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.310645  148123 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.310560  148123 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:38.310733  148123 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.311068  148123 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1010 19:23:38.469174  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.469563  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.488463  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.490815  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.491841  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.496079  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.501292  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1010 19:23:34.130443  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.130998  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.130915  149213 retry.go:31] will retry after 393.100812ms: waiting for machine to come up
	I1010 19:23:34.525570  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526032  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.526060  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.525980  149213 retry.go:31] will retry after 465.468437ms: waiting for machine to come up
	I1010 19:23:34.992775  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993348  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:34.993386  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:34.993287  149213 retry.go:31] will retry after 907.884473ms: waiting for machine to come up
	I1010 19:23:35.902481  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902942  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:35.902974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:35.902878  149213 retry.go:31] will retry after 1.157806188s: waiting for machine to come up
	I1010 19:23:37.062068  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:37.062777  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:37.062706  149213 retry.go:31] will retry after 1.432559208s: waiting for machine to come up
	I1010 19:23:38.496653  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:38.497153  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:38.497066  149213 retry.go:31] will retry after 1.559787003s: waiting for machine to come up
	I1010 19:23:37.961068  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.065559  147758 pod_ready.go:103] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:40.528757  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.528786  147758 pod_ready.go:82] duration metric: took 4.575513259s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.528802  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538002  147758 pod_ready.go:93] pod "kube-proxy-f5l6x" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.538034  147758 pod_ready.go:82] duration metric: took 9.22357ms for pod "kube-proxy-f5l6x" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.538049  147758 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543594  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:23:40.543615  147758 pod_ready.go:82] duration metric: took 5.558665ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:40.543626  147758 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	I1010 19:23:38.581315  148123 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1010 19:23:38.581361  148123 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1010 19:23:38.581385  148123 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.581407  148123 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.581442  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.581457  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.642668  148123 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1010 19:23:38.642721  148123 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.642777  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658235  148123 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1010 19:23:38.658291  148123 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.658288  148123 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1010 19:23:38.658328  148123 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.658346  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.658373  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.678038  148123 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1010 19:23:38.678097  148123 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.678155  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682719  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.682773  148123 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1010 19:23:38.682721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.682791  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.682812  148123 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1010 19:23:38.682845  148123 ssh_runner.go:195] Run: which crictl
	I1010 19:23:38.682851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.682872  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.691181  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842626  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:38.842714  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:38.842754  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:38.842797  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:38.842721  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:38.842851  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:38.842789  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:38.989994  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.005815  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1010 19:23:39.005923  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1010 19:23:39.010029  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1010 19:23:39.010105  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1010 19:23:39.010150  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1010 19:23:39.010230  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1010 19:23:39.134646  148123 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:23:39.141431  148123 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1010 19:23:39.176750  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1010 19:23:39.176945  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1010 19:23:39.176989  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1010 19:23:39.177038  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1010 19:23:39.177073  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1010 19:23:39.177104  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1010 19:23:39.335056  148123 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1010 19:23:39.335113  148123 cache_images.go:92] duration metric: took 1.026784312s to LoadCachedImages
	W1010 19:23:39.335180  148123 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1010 19:23:39.335192  148123 kubeadm.go:934] updating node { 192.168.61.112 8443 v1.20.0 crio true true} ...
	I1010 19:23:39.335305  148123 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-947203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:39.335380  148123 ssh_runner.go:195] Run: crio config
	I1010 19:23:39.394307  148123 cni.go:84] Creating CNI manager for ""
	I1010 19:23:39.394338  148123 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:39.394350  148123 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:39.394378  148123 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947203 NodeName:old-k8s-version-947203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1010 19:23:39.394572  148123 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-947203"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:39.394662  148123 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1010 19:23:39.405510  148123 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:39.405600  148123 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:39.415968  148123 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1010 19:23:39.443234  148123 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:39.462448  148123 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1010 19:23:39.481959  148123 ssh_runner.go:195] Run: grep 192.168.61.112	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:39.486188  148123 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:39.501922  148123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:39.642769  148123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:39.661335  148123 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203 for IP: 192.168.61.112
	I1010 19:23:39.661363  148123 certs.go:194] generating shared ca certs ...
	I1010 19:23:39.661384  148123 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:39.661595  148123 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:39.661658  148123 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:39.661671  148123 certs.go:256] generating profile certs ...
	I1010 19:23:39.661779  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/client.key
	I1010 19:23:39.661867  148123 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key.8a666a52
	I1010 19:23:39.661922  148123 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key
	I1010 19:23:39.662312  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:39.662395  148123 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:39.662410  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:39.662447  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:39.662501  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:39.662531  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:39.662622  148123 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:39.663372  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:39.710366  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:39.752724  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:39.801408  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:39.843557  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1010 19:23:39.892522  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 19:23:39.938519  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:39.966140  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/old-k8s-version-947203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:39.992974  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:40.019381  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:40.045769  148123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:40.080683  148123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:40.099747  148123 ssh_runner.go:195] Run: openssl version
	I1010 19:23:40.107865  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:40.123158  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128594  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.128660  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:40.135319  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:23:40.147514  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:40.162463  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168387  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.168512  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:40.176304  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:40.191306  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:40.204196  148123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211044  148123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.211122  148123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:40.219574  148123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:40.232634  148123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:40.237639  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:40.244141  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:40.251188  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:40.258538  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:40.265029  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:40.271754  148123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 19:23:40.278352  148123 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:40.278453  148123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:40.278518  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.317771  148123 cri.go:89] found id: ""
	I1010 19:23:40.317838  148123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:40.330494  148123 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:40.330520  148123 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:40.330576  148123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:40.341984  148123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:40.343386  148123 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-947203" does not appear in /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:23:40.344334  148123 kubeconfig.go:62] /home/jenkins/minikube-integration/19787-81676/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-947203" cluster setting kubeconfig missing "old-k8s-version-947203" context setting]
	I1010 19:23:40.345360  148123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:40.347128  148123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:40.357902  148123 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.112
	I1010 19:23:40.357944  148123 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:40.357961  148123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:40.358031  148123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:40.399970  148123 cri.go:89] found id: ""
	I1010 19:23:40.400057  148123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:40.418527  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:40.429182  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:40.429217  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:40.429262  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:23:40.439287  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:40.439363  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:40.449479  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:23:40.459288  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:40.459359  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:40.472733  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.484890  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:40.484958  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:40.495301  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:23:40.504750  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:40.504820  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:23:40.515034  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:40.526168  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:40.675022  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.566371  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.815296  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:41.930082  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:42.027597  148123 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:23:42.027713  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:42.528219  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:43.527735  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:40.058247  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058783  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:40.058835  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:40.058696  149213 retry.go:31] will retry after 2.214094081s: waiting for machine to come up
	I1010 19:23:42.274629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275167  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:42.275194  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:42.275106  149213 retry.go:31] will retry after 2.126528577s: waiting for machine to come up
	I1010 19:23:42.550865  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:45.051043  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:44.028463  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.527916  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.027911  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:45.528797  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.028772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:46.527982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.027799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:47.527894  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.028352  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:48.527935  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:44.403101  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403575  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:44.403616  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:44.403534  149213 retry.go:31] will retry after 3.603964622s: waiting for machine to come up
	I1010 19:23:48.008726  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009142  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | unable to find current IP address of domain default-k8s-diff-port-361847 in network mk-default-k8s-diff-port-361847
	I1010 19:23:48.009191  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | I1010 19:23:48.009100  149213 retry.go:31] will retry after 3.639744981s: waiting for machine to come up
	I1010 19:23:47.551003  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:49.661572  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:52.858209  147213 start.go:364] duration metric: took 56.558774237s to acquireMachinesLock for "no-preload-320324"
	I1010 19:23:52.858274  147213 start.go:96] Skipping create...Using existing machine configuration
	I1010 19:23:52.858283  147213 fix.go:54] fixHost starting: 
	I1010 19:23:52.858705  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:23:52.858742  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:23:52.878428  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38525
	I1010 19:23:52.878955  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:23:52.879563  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:23:52.879599  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:23:52.879945  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:23:52.880144  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:23:52.880282  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:23:52.881626  147213 fix.go:112] recreateIfNeeded on no-preload-320324: state=Stopped err=<nil>
	I1010 19:23:52.881650  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	W1010 19:23:52.881799  147213 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 19:23:52.883912  147213 out.go:177] * Restarting existing kvm2 VM for "no-preload-320324" ...
	I1010 19:23:49.028421  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:49.528271  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.028462  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:50.527867  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.028782  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:51.528581  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.028732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.027852  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:53.528647  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:52.885239  147213 main.go:141] libmachine: (no-preload-320324) Calling .Start
	I1010 19:23:52.885429  147213 main.go:141] libmachine: (no-preload-320324) Ensuring networks are active...
	I1010 19:23:52.886211  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network default is active
	I1010 19:23:52.886749  147213 main.go:141] libmachine: (no-preload-320324) Ensuring network mk-no-preload-320324 is active
	I1010 19:23:52.887310  147213 main.go:141] libmachine: (no-preload-320324) Getting domain xml...
	I1010 19:23:52.888034  147213 main.go:141] libmachine: (no-preload-320324) Creating domain...
	I1010 19:23:51.652975  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653464  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Found IP for machine: 192.168.50.32
	I1010 19:23:51.653487  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserving static IP address...
	I1010 19:23:51.653509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has current primary IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.653910  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.653956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | skip adding static IP to network mk-default-k8s-diff-port-361847 - found existing host DHCP lease matching {name: "default-k8s-diff-port-361847", mac: "52:54:00:a6:72:58", ip: "192.168.50.32"}
	I1010 19:23:51.653974  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Reserved static IP address: 192.168.50.32
	I1010 19:23:51.653993  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Waiting for SSH to be available...
	I1010 19:23:51.654006  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Getting to WaitForSSH function...
	I1010 19:23:51.655927  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656210  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.656240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.656334  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH client type: external
	I1010 19:23:51.656372  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa (-rw-------)
	I1010 19:23:51.656409  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:23:51.656425  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | About to run SSH command:
	I1010 19:23:51.656436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | exit 0
	I1010 19:23:51.780839  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | SSH cmd err, output: <nil>: 
	I1010 19:23:51.781206  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetConfigRaw
	I1010 19:23:51.781939  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:51.784347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784663  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.784696  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.784918  148525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/config.json ...
	I1010 19:23:51.785134  148525 machine.go:93] provisionDockerMachine start ...
	I1010 19:23:51.785158  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:51.785403  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.787817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788306  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.788347  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.788547  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.788807  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789038  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.789274  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.789515  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.789802  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.789825  148525 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:23:51.893367  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:23:51.893399  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893652  148525 buildroot.go:166] provisioning hostname "default-k8s-diff-port-361847"
	I1010 19:23:51.893699  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:51.893921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:51.896986  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897377  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:51.897422  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:51.897662  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:51.897815  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.897949  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:51.898064  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:51.898302  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:51.898489  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:51.898502  148525 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-361847 && echo "default-k8s-diff-port-361847" | sudo tee /etc/hostname
	I1010 19:23:52.015158  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-361847
	
	I1010 19:23:52.015199  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.018094  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018468  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.018497  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.018683  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.018901  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019039  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.019209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.019474  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.019690  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.019708  148525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-361847' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-361847/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-361847' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:23:52.133923  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 19:23:52.133960  148525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:23:52.134007  148525 buildroot.go:174] setting up certificates
	I1010 19:23:52.134023  148525 provision.go:84] configureAuth start
	I1010 19:23:52.134043  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetMachineName
	I1010 19:23:52.134351  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.137242  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137637  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.137670  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.137860  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.140264  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.140672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.140833  148525 provision.go:143] copyHostCerts
	I1010 19:23:52.140907  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:23:52.140922  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:23:52.140977  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:23:52.141088  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:23:52.141098  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:23:52.141118  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:23:52.141175  148525 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:23:52.141182  148525 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:23:52.141213  148525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:23:52.141264  148525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-361847 san=[127.0.0.1 192.168.50.32 default-k8s-diff-port-361847 localhost minikube]
	I1010 19:23:52.241146  148525 provision.go:177] copyRemoteCerts
	I1010 19:23:52.241212  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:23:52.241241  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.244061  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244463  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.244490  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.244731  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.244929  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.245110  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.245228  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.327309  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:23:52.352288  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 19:23:52.376308  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 19:23:52.400807  148525 provision.go:87] duration metric: took 266.765119ms to configureAuth
	I1010 19:23:52.400862  148525 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:23:52.401065  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:23:52.401171  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.403552  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.403919  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.403950  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.404173  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.404371  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404513  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.404629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.404743  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.404927  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.404949  148525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:23:52.622902  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:23:52.622930  148525 machine.go:96] duration metric: took 837.779579ms to provisionDockerMachine
	I1010 19:23:52.622942  148525 start.go:293] postStartSetup for "default-k8s-diff-port-361847" (driver="kvm2")
	I1010 19:23:52.622952  148525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:23:52.622968  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.623331  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:23:52.623369  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.626106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626435  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.626479  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.626721  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.626932  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.627091  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.627262  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.708050  148525 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:23:52.712524  148525 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:23:52.712550  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:23:52.712608  148525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:23:52.712688  148525 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:23:52.712782  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:23:52.723719  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:52.747686  148525 start.go:296] duration metric: took 124.729371ms for postStartSetup
	I1010 19:23:52.747727  148525 fix.go:56] duration metric: took 20.853721623s for fixHost
	I1010 19:23:52.747749  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.750316  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750645  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.750677  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.750817  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.751046  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751195  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.751333  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.751511  148525 main.go:141] libmachine: Using SSH client type: native
	I1010 19:23:52.751733  148525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.32 22 <nil> <nil>}
	I1010 19:23:52.751749  148525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:23:52.857986  148525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588232.831281012
	
	I1010 19:23:52.858019  148525 fix.go:216] guest clock: 1728588232.831281012
	I1010 19:23:52.858029  148525 fix.go:229] Guest: 2024-10-10 19:23:52.831281012 +0000 UTC Remote: 2024-10-10 19:23:52.747731551 +0000 UTC m=+158.845659062 (delta=83.549461ms)
	I1010 19:23:52.858075  148525 fix.go:200] guest clock delta is within tolerance: 83.549461ms
	I1010 19:23:52.858088  148525 start.go:83] releasing machines lock for "default-k8s-diff-port-361847", held for 20.964121636s
	I1010 19:23:52.858120  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.858491  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:52.861220  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861640  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.861672  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.861828  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862337  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862548  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:23:52.862655  148525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:23:52.862702  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.862825  148525 ssh_runner.go:195] Run: cat /version.json
	I1010 19:23:52.862854  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:23:52.865579  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.865921  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.865960  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866290  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866300  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:52.866319  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:52.866423  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866496  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:23:52.866648  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:23:52.866671  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.866798  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:23:52.866910  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:23:52.966354  148525 ssh_runner.go:195] Run: systemctl --version
	I1010 19:23:52.972526  148525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:23:53.119801  148525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:23:53.126287  148525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:23:53.126355  148525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:23:53.147301  148525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:23:53.147325  148525 start.go:495] detecting cgroup driver to use...
	I1010 19:23:53.147381  148525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:23:53.167368  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:23:53.183239  148525 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:23:53.183308  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:23:53.203230  148525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:23:53.217261  148525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:23:53.343555  148525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:23:53.491952  148525 docker.go:233] disabling docker service ...
	I1010 19:23:53.492054  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:23:53.508136  148525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:23:53.521662  148525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:23:53.651858  148525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:23:53.781954  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:23:53.803934  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:23:53.826070  148525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:23:53.826146  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.837506  148525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:23:53.837587  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.848653  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.860511  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.873254  148525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:23:53.887862  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.899507  148525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.923325  148525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:23:53.934999  148525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:23:53.946869  148525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:23:53.946945  148525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:23:53.968116  148525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:23:53.980109  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:54.106345  148525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 19:23:54.210345  148525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:23:54.210417  148525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:23:54.215968  148525 start.go:563] Will wait 60s for crictl version
	I1010 19:23:54.216037  148525 ssh_runner.go:195] Run: which crictl
	I1010 19:23:54.219885  148525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:23:54.260286  148525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:23:54.260375  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.289908  148525 ssh_runner.go:195] Run: crio --version
	I1010 19:23:54.320940  148525 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:23:52.050137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.060194  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:56.551981  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:54.027982  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:54.528065  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.027890  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:55.527753  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.027877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:56.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.028263  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:57.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.028743  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.528039  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
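The repeated pgrep lines here (and later in this log) are a simple half-second poll until a kube-apiserver process appears. A standalone sketch of that wait; the pattern string is the one from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` every interval until it exits 0
// (process found) or the timeout elapses.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for process matching %q", pattern)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 4*time.Minute)
	fmt.Println("apiserver process wait:", err)
}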
	I1010 19:23:54.234149  147213 main.go:141] libmachine: (no-preload-320324) Waiting to get IP...
	I1010 19:23:54.235147  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.235598  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.235657  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.235580  149378 retry.go:31] will retry after 308.921504ms: waiting for machine to come up
	I1010 19:23:54.546327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.547002  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.547029  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.546956  149378 retry.go:31] will retry after 288.92327ms: waiting for machine to come up
	I1010 19:23:54.837625  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:54.838136  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:54.838164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:54.838054  149378 retry.go:31] will retry after 321.948113ms: waiting for machine to come up
	I1010 19:23:55.161940  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.162494  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.162526  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.162441  149378 retry.go:31] will retry after 573.848095ms: waiting for machine to come up
	I1010 19:23:55.739080  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:55.739592  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:55.739620  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:55.739494  149378 retry.go:31] will retry after 529.087622ms: waiting for machine to come up
	I1010 19:23:56.270324  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.270899  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.270929  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.270850  149378 retry.go:31] will retry after 629.204989ms: waiting for machine to come up
	I1010 19:23:56.901836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:56.902283  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:56.902325  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:56.902222  149378 retry.go:31] will retry after 804.309499ms: waiting for machine to come up
	I1010 19:23:57.708806  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:57.709175  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:57.709208  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:57.709151  149378 retry.go:31] will retry after 1.204078295s: waiting for machine to come up
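The retry.go lines above show a wait loop with growing, slightly jittered delays while the libvirt domain acquires a DHCP lease. A generic sketch of that pattern follows; lookupIP is a stand-in assumption, not minikube's API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) after each failed attempt.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// lookupIP is a hypothetical stand-in for querying the DHCP leases of the
	// libvirt network; here it fails a few times and then succeeds.
	calls := 0
	lookupIP := func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.72.11", nil
	}
	ip, err := waitForIP(lookupIP, time.Minute)
	fmt.Println(ip, err)
}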
	I1010 19:23:54.322534  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetIP
	I1010 19:23:54.325744  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326217  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:23:54.326257  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:23:54.326533  148525 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1010 19:23:54.331527  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:54.343881  148525 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:23:54.344033  148525 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:23:54.344084  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:54.389066  148525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:23:54.389149  148525 ssh_runner.go:195] Run: which lz4
	I1010 19:23:54.393550  148525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1010 19:23:54.397787  148525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1010 19:23:54.397833  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1010 19:23:55.897111  148525 crio.go:462] duration metric: took 1.503593301s to copy over tarball
	I1010 19:23:55.897212  148525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1010 19:23:58.060691  148525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16343467s)
	I1010 19:23:58.060731  148525 crio.go:469] duration metric: took 2.163580526s to extract the tarball
	I1010 19:23:58.060741  148525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1010 19:23:58.103877  148525 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:23:58.162881  148525 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 19:23:58.162907  148525 cache_images.go:84] Images are preloaded, skipping loading
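The preload handling above boils down to three steps: stat the tarball on the guest, copy it over if missing, extract it into /var with lz4 and xattrs preserved, then re-list the runtime's images. A hedged local sketch of those steps (paths and flags are taken from the logged commands; the rest is illustrative, and the real copy happens over SSH rather than locally):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const tarball = "/preloaded.tar.lz4" // remote path used in the log above

	// 1. Existence check, analogous to `stat -c "%s %y" /preloaded.tar.lz4`.
	if _, err := os.Stat(tarball); err != nil {
		// In minikube the tarball would be scp'd from the host cache here;
		// this sketch just reports that it is missing.
		fmt.Println("preload tarball not present, would copy it over:", err)
		return
	}
	// 2. Extract it into /var with xattrs preserved, as in the logged command.
	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	// 3. Confirm the runtime now sees the preloaded images.
	_ = run("sudo", "crictl", "images", "--output", "json")
}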
	I1010 19:23:58.162915  148525 kubeadm.go:934] updating node { 192.168.50.32 8444 v1.31.1 crio true true} ...
	I1010 19:23:58.163031  148525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-361847 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 19:23:58.163098  148525 ssh_runner.go:195] Run: crio config
	I1010 19:23:58.219804  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:23:58.219827  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:23:58.219837  148525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:23:58.219861  148525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.32 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-361847 NodeName:default-k8s-diff-port-361847 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
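These options are rendered into the kubeadm config printed next. Purely for illustration, a toy version of that templating step (the struct and template text below are assumptions, not minikube's real ones; the values are copied from the log):

package main

import (
	"os"
	"text/template"
)

// opts is a hypothetical subset of the kubeadm options logged above.
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	PodSubnet        string
	ServiceCIDR      string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values copied from the log above.
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.50.32",
		APIServerPort:    8444,
		NodeName:         "default-k8s-diff-port-361847",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
	})
}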
	I1010 19:23:58.219982  148525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.32
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-361847"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:23:58.220042  148525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:23:58.231444  148525 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:23:58.231565  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:23:58.241835  148525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I1010 19:23:58.259408  148525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:23:58.276571  148525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I1010 19:23:58.294640  148525 ssh_runner.go:195] Run: grep 192.168.50.32	control-plane.minikube.internal$ /etc/hosts
	I1010 19:23:58.298503  148525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:23:58.312286  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:23:58.449757  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:23:58.467342  148525 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847 for IP: 192.168.50.32
	I1010 19:23:58.467377  148525 certs.go:194] generating shared ca certs ...
	I1010 19:23:58.467398  148525 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:23:58.467583  148525 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:23:58.467642  148525 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:23:58.467655  148525 certs.go:256] generating profile certs ...
	I1010 19:23:58.467826  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/client.key
	I1010 19:23:58.467895  148525 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key.ae5e3f04
	I1010 19:23:58.467951  148525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key
	I1010 19:23:58.468089  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:23:58.468136  148525 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:23:58.468153  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:23:58.468194  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:23:58.468226  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:23:58.468260  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:23:58.468317  148525 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:23:58.468931  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:23:58.529632  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:23:58.571900  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:23:58.612599  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:23:58.645536  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 19:23:58.675961  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:23:58.700712  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:23:58.725355  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/default-k8s-diff-port-361847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:23:58.751138  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:23:58.775832  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:23:58.800729  148525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:23:58.825558  148525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:23:58.843331  148525 ssh_runner.go:195] Run: openssl version
	I1010 19:23:58.849271  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:23:58.861031  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865721  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.865797  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:23:58.871961  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:23:58.884520  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:23:58.896744  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901507  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.901571  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:23:58.907366  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:23:58.919784  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:23:58.931972  148525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936897  148525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.936981  148525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:23:58.943007  148525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
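The openssl/ln steps above install each CA certificate under /usr/share/ca-certificates and link it into /etc/ssl/certs under its subject hash (e.g. b5213941.0). A compact sketch of that per-certificate routine, shelling out to the same openssl invocation shown in the log; installCert and the example path are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCert links certPath into /etc/ssl/certs under its openssl subject
// hash, mirroring the `openssl x509 -hash -noout` + `ln -fs` steps above.
func installCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Only create the symlink if it does not already exist.
	cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, certPath, link)
	return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}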
	I1010 19:23:59.052037  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:01.551982  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:23:59.028519  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:59.527790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.027889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:00.528623  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.028697  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.528030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.028062  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:02.527876  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.028745  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:03.528569  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:23:58.914409  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:23:58.914894  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:23:58.914927  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:23:58.914831  149378 retry.go:31] will retry after 1.631827888s: waiting for machine to come up
	I1010 19:24:00.548505  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:00.549135  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:00.549164  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:00.549043  149378 retry.go:31] will retry after 2.126895157s: waiting for machine to come up
	I1010 19:24:02.678328  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:02.678907  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:02.678969  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:02.678891  149378 retry.go:31] will retry after 2.754376625s: waiting for machine to come up
	I1010 19:23:58.955104  148525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:23:58.959833  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:23:58.966528  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:23:58.973590  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:23:58.982390  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:23:58.990767  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:23:58.997162  148525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
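Each `openssl x509 -checkend 86400` call above asks whether a certificate will still be valid 24 hours from now. An equivalent in-process check with crypto/x509, offered only as an illustration of what -checkend verifies (the path is one of those from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// 86400 seconds is the same window the openssl -checkend calls use.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}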
	I1010 19:23:59.003647  148525 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-361847 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-361847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:23:59.003786  148525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:23:59.003865  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.048772  148525 cri.go:89] found id: ""
	I1010 19:23:59.048869  148525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:23:59.061267  148525 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:23:59.061288  148525 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:23:59.061338  148525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:23:59.072629  148525 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:23:59.074287  148525 kubeconfig.go:125] found "default-k8s-diff-port-361847" server: "https://192.168.50.32:8444"
	I1010 19:23:59.077880  148525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:23:59.090738  148525 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.32
	I1010 19:23:59.090783  148525 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:23:59.090799  148525 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:23:59.090886  148525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:23:59.136762  148525 cri.go:89] found id: ""
	I1010 19:23:59.136888  148525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:23:59.155937  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:23:59.166471  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:23:59.166493  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:23:59.166549  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:23:59.178247  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:23:59.178313  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:23:59.189455  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:23:59.200127  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:23:59.200204  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:23:59.210764  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.221048  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:23:59.221119  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:23:59.231762  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:23:59.242152  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:23:59.242217  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
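The grep/rm sequence above removes any existing kubeconfig that does not reference the expected control-plane endpoint, so the kubeadm init phases that follow will regenerate them. A sketch of the same check done in-process; the endpoint and file list are copied from the log, the rest is illustrative:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A missing file and a file without the endpoint are treated the same
		// way: remove it (if present) so `kubeadm init phase kubeconfig`
		// writes a fresh one.
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}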
	I1010 19:23:59.252608  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:23:59.265219  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:23:59.391743  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.243288  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.453782  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.532137  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:00.623598  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:00.623711  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.124678  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.624626  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:01.667587  148525 api_server.go:72] duration metric: took 1.043987857s to wait for apiserver process to appear ...
	I1010 19:24:01.667621  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:01.667649  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:01.668298  148525 api_server.go:269] stopped: https://192.168.50.32:8444/healthz: Get "https://192.168.50.32:8444/healthz": dial tcp 192.168.50.32:8444: connect: connection refused
	I1010 19:24:02.168273  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.275654  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.275695  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.275713  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.309713  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:05.309770  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:05.668325  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:05.684992  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:05.685031  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.168198  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.176584  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:06.176627  148525 api_server.go:103] status: https://192.168.50.32:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:06.668130  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:24:06.682049  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:24:06.692780  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:06.692811  148525 api_server.go:131] duration metric: took 5.025182717s to wait for apiserver health ...
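The api_server.go lines above keep probing /healthz and treat 403 (RBAC not bootstrapped yet) and 500 (poststart hooks still running) as "try again", stopping once a plain 200 ok comes back. A hedged sketch of that probe loop; the InsecureSkipVerify client is only to keep the example self-contained, not a recommendation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200, retrying on connection
// errors and on 403/500 responses like the ones shown in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is signed by the cluster CA; skipping
		// verification here just keeps the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	// Endpoint taken from the log above.
	fmt.Println(waitForHealthz("https://192.168.50.32:8444/healthz", 5*time.Minute))
}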
	I1010 19:24:06.692820  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:24:06.692831  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:06.694447  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:03.558797  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:06.054012  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:04.028711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:04.528770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.028716  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.528083  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.028204  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:06.528430  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.027972  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:07.527987  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.027829  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:08.528676  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:05.435450  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:05.435940  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:05.435970  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:05.435888  149378 retry.go:31] will retry after 2.981990051s: waiting for machine to come up
	I1010 19:24:08.419385  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:08.419982  147213 main.go:141] libmachine: (no-preload-320324) DBG | unable to find current IP address of domain no-preload-320324 in network mk-no-preload-320324
	I1010 19:24:08.420006  147213 main.go:141] libmachine: (no-preload-320324) DBG | I1010 19:24:08.419905  149378 retry.go:31] will retry after 3.976204267s: waiting for machine to come up
	I1010 19:24:06.695841  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:06.711212  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:06.747753  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:06.768344  148525 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:06.768429  148525 system_pods.go:61] "coredns-7c65d6cfc9-rv8vq" [93b209ea-bb5f-40c5-aea8-8771b785f021] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:06.768446  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [65129999-984d-497c-a6e1-9c53a5374991] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:06.768452  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [5f18ba24-29cf-433e-a70d-23757278c04f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:06.768460  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [c189c785-8ac5-4003-802d-9e7c089d450e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:06.768467  148525 system_pods.go:61] "kube-proxy-v5lm8" [e78eabf9-5c65-4cba-83fd-0837cef05126] Running
	I1010 19:24:06.768476  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [4f84f0f5-e255-4534-9db3-e5cfee0b2447] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:06.768485  148525 system_pods.go:61] "metrics-server-6867b74b74-h5kjm" [a3979b79-bd21-490b-97ac-0a78efd43a99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:06.768493  148525 system_pods.go:61] "storage-provisioner" [ca8606d3-9adb-46da-886a-3081b11b52a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 19:24:06.768499  148525 system_pods.go:74] duration metric: took 20.716461ms to wait for pod list to return data ...
	I1010 19:24:06.768509  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:06.777935  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:06.777973  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:06.777988  148525 node_conditions.go:105] duration metric: took 9.473726ms to run NodePressure ...
	I1010 19:24:06.778019  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:07.053296  148525 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057585  148525 kubeadm.go:739] kubelet initialised
	I1010 19:24:07.057608  148525 kubeadm.go:740] duration metric: took 4.283027ms waiting for restarted kubelet to initialise ...
	I1010 19:24:07.057618  148525 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:07.064157  148525 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.069962  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.069989  148525 pod_ready.go:82] duration metric: took 5.791958ms for pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.069999  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "coredns-7c65d6cfc9-rv8vq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.070022  148525 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.075615  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075644  148525 pod_ready.go:82] duration metric: took 5.608749ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.075654  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.075661  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.081717  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081743  148525 pod_ready.go:82] duration metric: took 6.074977ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.081754  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.081761  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.152204  148525 pod_ready.go:98] node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152244  148525 pod_ready.go:82] duration metric: took 70.475599ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:07.152258  148525 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-361847" hosting pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-361847" has status "Ready":"False"
	I1010 19:24:07.152266  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551283  148525 pod_ready.go:93] pod "kube-proxy-v5lm8" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:07.551311  148525 pod_ready.go:82] duration metric: took 399.036581ms for pod "kube-proxy-v5lm8" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:07.551324  148525 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
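The pod_ready.go lines above wait on each system-critical pod's Ready condition, skipping the wait while the node itself is still NotReady. A minimal client-go sketch of the per-pod check, assuming a reachable kubeconfig at the path the log writes out; this is an illustration, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition Ready=True.
func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll one pod, the way the log polls kube-proxy-v5lm8, until Ready or timeout.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := podReady(cs, "kube-system", "kube-proxy-v5lm8")
		fmt.Println("ready:", ready, "err:", err)
		if ready {
			return
		}
		time.Sleep(2 * time.Second)
	}
}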
	I1010 19:24:08.550896  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:10.551437  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:09.028538  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:09.527883  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.028693  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:10.528439  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.028528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:11.528679  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.028002  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.527904  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.028685  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:13.527833  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:12.401115  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401808  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has current primary IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.401841  147213 main.go:141] libmachine: (no-preload-320324) Found IP for machine: 192.168.72.11
	I1010 19:24:12.401856  147213 main.go:141] libmachine: (no-preload-320324) Reserving static IP address...
	I1010 19:24:12.402368  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.402407  147213 main.go:141] libmachine: (no-preload-320324) DBG | skip adding static IP to network mk-no-preload-320324 - found existing host DHCP lease matching {name: "no-preload-320324", mac: "52:54:00:95:03:cd", ip: "192.168.72.11"}
	I1010 19:24:12.402426  147213 main.go:141] libmachine: (no-preload-320324) Reserved static IP address: 192.168.72.11
	I1010 19:24:12.402443  147213 main.go:141] libmachine: (no-preload-320324) Waiting for SSH to be available...
	I1010 19:24:12.402458  147213 main.go:141] libmachine: (no-preload-320324) DBG | Getting to WaitForSSH function...
	I1010 19:24:12.404803  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405200  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.405226  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.405461  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH client type: external
	I1010 19:24:12.405494  147213 main.go:141] libmachine: (no-preload-320324) DBG | Using SSH private key: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa (-rw-------)
	I1010 19:24:12.405527  147213 main.go:141] libmachine: (no-preload-320324) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1010 19:24:12.405541  147213 main.go:141] libmachine: (no-preload-320324) DBG | About to run SSH command:
	I1010 19:24:12.405554  147213 main.go:141] libmachine: (no-preload-320324) DBG | exit 0
	I1010 19:24:12.529010  147213 main.go:141] libmachine: (no-preload-320324) DBG | SSH cmd err, output: <nil>: 
	I1010 19:24:12.529401  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetConfigRaw
	I1010 19:24:12.530257  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.533285  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533692  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.533727  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.533963  147213 profile.go:143] Saving config to /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/config.json ...
	I1010 19:24:12.534205  147213 machine.go:93] provisionDockerMachine start ...
	I1010 19:24:12.534230  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:12.534450  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.536585  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.536976  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.537003  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.537133  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.537323  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537512  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.537689  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.537925  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.538138  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.538151  147213 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 19:24:12.641679  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1010 19:24:12.641706  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.641964  147213 buildroot.go:166] provisioning hostname "no-preload-320324"
	I1010 19:24:12.642002  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.642235  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.645149  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645488  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.645521  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.645647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.645836  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646001  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.646155  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.646352  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.646533  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.646545  147213 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-320324 && echo "no-preload-320324" | sudo tee /etc/hostname
	I1010 19:24:12.766449  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-320324
	
	I1010 19:24:12.766480  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.769836  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770331  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.770356  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.770584  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.770810  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.770962  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.771119  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.771252  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:12.771448  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:12.771470  147213 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-320324' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-320324/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-320324' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 19:24:12.882458  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
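	The SSH command above is the idempotent /etc/hosts fixup: add or rewrite the 127.0.1.1 entry only when the new hostname is not already present. A minimal sketch of how such a script could be assembled and run; the hostname value and the plain sh -c runner here are assumptions standing in for the SSH runner seen in the log:

	// Sketch only: builds the grep/sed/tee hosts fixup shown above and runs it locally.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// hostsFixupScript maps 127.0.1.1 to hostname only if no entry for it exists yet.
	func hostsFixupScript(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname)
	}

	func main() {
		script := hostsFixupScript("no-preload-320324")
		// In minikube this goes through the SSH runner; sh -c is a local stand-in.
		out, err := exec.Command("sh", "-c", script).CombinedOutput()
		fmt.Printf("err=%v output=%q\n", err, out)
	}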
	I1010 19:24:12.882495  147213 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19787-81676/.minikube CaCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19787-81676/.minikube}
	I1010 19:24:12.882537  147213 buildroot.go:174] setting up certificates
	I1010 19:24:12.882547  147213 provision.go:84] configureAuth start
	I1010 19:24:12.882562  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetMachineName
	I1010 19:24:12.882865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:12.885854  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886139  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.886173  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.886308  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.888479  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.888819  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.888976  147213 provision.go:143] copyHostCerts
	I1010 19:24:12.889037  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem, removing ...
	I1010 19:24:12.889049  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem
	I1010 19:24:12.889102  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/cert.pem (1123 bytes)
	I1010 19:24:12.889235  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem, removing ...
	I1010 19:24:12.889246  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem
	I1010 19:24:12.889278  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/key.pem (1675 bytes)
	I1010 19:24:12.889370  147213 exec_runner.go:144] found /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem, removing ...
	I1010 19:24:12.889381  147213 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem
	I1010 19:24:12.889406  147213 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19787-81676/.minikube/ca.pem (1078 bytes)
	I1010 19:24:12.889493  147213 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem org=jenkins.no-preload-320324 san=[127.0.0.1 192.168.72.11 localhost minikube no-preload-320324]
	I1010 19:24:12.978176  147213 provision.go:177] copyRemoteCerts
	I1010 19:24:12.978235  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 19:24:12.978261  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:12.981662  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982182  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:12.982218  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:12.982452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:12.982647  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:12.982829  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:12.983005  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.067269  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1010 19:24:13.092777  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 19:24:13.118530  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 19:24:13.143401  147213 provision.go:87] duration metric: took 260.833877ms to configureAuth
	I1010 19:24:13.143436  147213 buildroot.go:189] setting minikube options for container-runtime
	I1010 19:24:13.143678  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:13.143776  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.147086  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147507  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.147531  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.147787  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.148032  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148222  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.148452  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.148660  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.149013  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.149041  147213 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 19:24:13.375683  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 19:24:13.375714  147213 machine.go:96] duration metric: took 841.493636ms to provisionDockerMachine
	I1010 19:24:13.375736  147213 start.go:293] postStartSetup for "no-preload-320324" (driver="kvm2")
	I1010 19:24:13.375754  147213 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 19:24:13.375775  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.376085  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 19:24:13.376116  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.378855  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379179  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.379224  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.379408  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.379608  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.379769  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.379910  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.459580  147213 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 19:24:13.463644  147213 info.go:137] Remote host: Buildroot 2023.02.9
	I1010 19:24:13.463674  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/addons for local assets ...
	I1010 19:24:13.463751  147213 filesync.go:126] Scanning /home/jenkins/minikube-integration/19787-81676/.minikube/files for local assets ...
	I1010 19:24:13.463845  147213 filesync.go:149] local asset: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem -> 888762.pem in /etc/ssl/certs
	I1010 19:24:13.463963  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 19:24:13.473483  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:13.498773  147213 start.go:296] duration metric: took 123.021762ms for postStartSetup
	I1010 19:24:13.498814  147213 fix.go:56] duration metric: took 20.640532088s for fixHost
	I1010 19:24:13.498834  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.501681  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502243  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.502281  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.502476  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.502679  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502835  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.502993  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.503177  147213 main.go:141] libmachine: Using SSH client type: native
	I1010 19:24:13.503383  147213 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.11 22 <nil> <nil>}
	I1010 19:24:13.503396  147213 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1010 19:24:13.613929  147213 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728588253.586950075
	
	I1010 19:24:13.613954  147213 fix.go:216] guest clock: 1728588253.586950075
	I1010 19:24:13.613963  147213 fix.go:229] Guest: 2024-10-10 19:24:13.586950075 +0000 UTC Remote: 2024-10-10 19:24:13.498818059 +0000 UTC m=+359.788559229 (delta=88.132016ms)
	I1010 19:24:13.613988  147213 fix.go:200] guest clock delta is within tolerance: 88.132016ms
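	The guest-clock check compares the node's date +%s.%N output against the host's wall clock and accepts the host when the delta is within tolerance. A small sketch of that comparison, assuming a 2-second tolerance (the actual threshold is not shown in the log):

	// Sketch only: parse a fractional Unix timestamp and compare against a tolerance.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	// withinTolerance reports the host-guest delta and whether it is acceptable.
	func withinTolerance(guestUnix string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(guestUnix, 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := host.Sub(guest)
		return delta, math.Abs(float64(delta)) <= float64(tol), nil
	}

	func main() {
		delta, ok, err := withinTolerance("1728588253.586950075", time.Now(), 2*time.Second)
		fmt.Println("delta:", delta, "within tolerance:", ok, "err:", err)
	}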
	I1010 19:24:13.614020  147213 start.go:83] releasing machines lock for "no-preload-320324", held for 20.755775587s
	I1010 19:24:13.614063  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.614473  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:13.617327  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.617694  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.617721  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.618016  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618670  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618884  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:13.618989  147213 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 19:24:13.619047  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.619142  147213 ssh_runner.go:195] Run: cat /version.json
	I1010 19:24:13.619185  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:13.621972  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622229  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622322  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622348  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622533  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622666  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:13.622697  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:13.622736  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.622865  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:13.622930  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623059  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:13.623073  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.623225  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:13.623349  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:13.720999  147213 ssh_runner.go:195] Run: systemctl --version
	I1010 19:24:13.727679  147213 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 19:24:09.562834  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:12.058686  148525 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:13.870558  147213 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 19:24:13.877853  147213 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 19:24:13.877923  147213 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 19:24:13.896295  147213 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 19:24:13.896325  147213 start.go:495] detecting cgroup driver to use...
	I1010 19:24:13.896400  147213 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 19:24:13.913122  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 19:24:13.929359  147213 docker.go:217] disabling cri-docker service (if available) ...
	I1010 19:24:13.929437  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 19:24:13.944840  147213 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 19:24:13.960062  147213 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 19:24:14.090774  147213 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 19:24:14.246094  147213 docker.go:233] disabling docker service ...
	I1010 19:24:14.246161  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 19:24:14.264682  147213 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 19:24:14.280264  147213 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 19:24:14.437156  147213 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 19:24:14.569220  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 19:24:14.585723  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 19:24:14.607349  147213 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1010 19:24:14.607429  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.619113  147213 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1010 19:24:14.619198  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.631818  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.643977  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.655753  147213 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 19:24:14.667235  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.679225  147213 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.698760  147213 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 19:24:14.710440  147213 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 19:24:14.722565  147213 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 19:24:14.722625  147213 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 19:24:14.740587  147213 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 19:24:14.752630  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:14.887728  147213 ssh_runner.go:195] Run: sudo systemctl restart crio
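	The preceding sed and systemctl sequence rewrites cri-o's drop-in config (pause image, cgroup manager) and then restarts the service. A hedged sketch of the same reconfiguration flow; the helper names and local sh -c execution are illustrative, not minikube's actual API:

	// Sketch only: reproduce the sed-based cri-o drop-in edits and restart seen above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(cmd string) error {
		fmt.Println("+", cmd)
		return exec.Command("sh", "-c", cmd).Run()
	}

	// configureCRIO sets the pause image and cgroup manager, then restarts cri-o.
	func configureCRIO(pauseImage, cgroupManager string) error {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		steps := []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
		for _, s := range steps {
			if err := run(s); err != nil {
				return fmt.Errorf("%q failed: %w", s, err)
			}
		}
		return nil
	}

	func main() {
		if err := configureCRIO("registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
			fmt.Println("error:", err)
		}
	}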
	I1010 19:24:14.989026  147213 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 19:24:14.989109  147213 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 19:24:14.995309  147213 start.go:563] Will wait 60s for crictl version
	I1010 19:24:14.995366  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.999840  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1010 19:24:15.043758  147213 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1010 19:24:15.043856  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.079274  147213 ssh_runner.go:195] Run: crio --version
	I1010 19:24:15.116630  147213 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1010 19:24:13.050633  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:15.552413  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:14.028090  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:14.528072  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.028648  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.528388  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.028060  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:16.528472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.028098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:17.528125  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.028563  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:18.528613  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:15.118343  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetIP
	I1010 19:24:15.121596  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122101  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:15.122133  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:15.122396  147213 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1010 19:24:15.127140  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:15.141249  147213 kubeadm.go:883] updating cluster {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 19:24:15.141375  147213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 19:24:15.141417  147213 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 19:24:15.183271  147213 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1010 19:24:15.183303  147213 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1010 19:24:15.183412  147213 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.183444  147213 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.183452  147213 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.183459  147213 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1010 19:24:15.183422  147213 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.183493  147213 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.183512  147213 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.183507  147213 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:15.185099  147213 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.185097  147213 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.185098  147213 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.185103  147213 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.185106  147213 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.328484  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.333573  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.340047  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.358922  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1010 19:24:15.359800  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.366668  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.409942  147213 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1010 19:24:15.409995  147213 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.410050  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.416186  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.452343  147213 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1010 19:24:15.452385  147213 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.452426  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.533567  147213 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1010 19:24:15.533620  147213 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.533671  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585611  147213 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1010 19:24:15.585659  147213 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.585685  147213 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1010 19:24:15.585712  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585724  147213 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.585765  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585769  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.585805  147213 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1010 19:24:15.585832  147213 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.585856  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.585872  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:15.585943  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.603131  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.661918  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.683739  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.683760  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.683833  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.683880  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.685385  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.792253  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1010 19:24:15.818116  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.818183  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1010 19:24:15.818289  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1010 19:24:15.818321  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.818402  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1010 19:24:15.878069  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1010 19:24:15.878202  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.940520  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1010 19:24:15.953841  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1010 19:24:15.953955  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:15.953990  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1010 19:24:15.954047  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1010 19:24:15.954115  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1010 19:24:15.954120  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1010 19:24:15.954130  147213 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954144  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:15.954157  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1010 19:24:15.954205  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:16.005975  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1010 19:24:16.006028  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1010 19:24:16.006090  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:16.023905  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1010 19:24:16.023990  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1010 19:24:16.024024  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:16.024023  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1010 19:24:16.033715  147213 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.150881  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.144766677s)
	I1010 19:24:18.150935  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1010 19:24:18.150931  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.196753845s)
	I1010 19:24:18.150944  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.126894115s)
	I1010 19:24:18.150973  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1010 19:24:18.150953  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1010 19:24:18.150982  147213 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.117235962s)
	I1010 19:24:18.151002  147213 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151014  147213 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1010 19:24:18.151053  147213 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:18.151069  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1010 19:24:18.151097  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:24:14.059223  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:14.059252  148525 pod_ready.go:82] duration metric: took 6.507918149s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:14.059266  148525 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:16.066908  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.082398  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:18.051799  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:20.552644  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:19.028306  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:19.527854  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.028765  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:20.528245  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.027936  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.528172  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.028527  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:22.528446  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.028709  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.528711  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:21.952099  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.801005716s)
	I1010 19:24:21.952134  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1010 19:24:21.952163  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952165  147213 ssh_runner.go:235] Completed: which crictl: (3.801048272s)
	I1010 19:24:21.952212  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1010 19:24:21.952225  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:21.993627  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:20.566055  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:22.567145  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:23.053514  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:25.554151  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:24.028402  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:24.528516  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.028207  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:25.527928  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.028032  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:26.527826  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.028609  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:27.528448  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.028018  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:28.528597  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:23.929370  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.977128659s)
	I1010 19:24:23.929418  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1010 19:24:23.929450  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929498  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.935844384s)
	I1010 19:24:23.929532  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1010 19:24:23.929551  147213 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:26.009485  147213 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.079908324s)
	I1010 19:24:26.009567  147213 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 19:24:26.009484  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.079925224s)
	I1010 19:24:26.009641  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1010 19:24:26.009671  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:26.009684  147213 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:26.009720  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1010 19:24:27.968483  147213 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.958772952s)
	I1010 19:24:27.968534  147213 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1010 19:24:27.968559  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.958813643s)
	I1010 19:24:27.968587  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1010 19:24:27.968619  147213 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:27.968686  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1010 19:24:25.069787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:27.567013  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:28.050968  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:30.551528  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:29.027960  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.528054  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.028227  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:30.527791  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.027790  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:31.528761  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.028343  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:32.528553  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.028195  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.527895  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:29.315157  147213 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.346440456s)
	I1010 19:24:29.315211  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1010 19:24:29.315244  147213 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:29.315296  147213 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1010 19:24:30.173931  147213 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19787-81676/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 19:24:30.173977  147213 cache_images.go:123] Successfully loaded all cached images
	I1010 19:24:30.173985  147213 cache_images.go:92] duration metric: took 14.990666845s to LoadCachedImages
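
The "Loading image:" / "podman load" pairs above are minikube's cache-load path for the crio runtime: each tarball under /var/lib/minikube/images is streamed into the guest's image store with podman, and CRI-O can then serve the image because, on the minikube guest, the two share the same containers/storage backend. A rough manual equivalent, assuming shell access to the node and using a tarball path taken from the log:

    sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
    sudo crictl images | grep kube-scheduler   # CRI-O should now list the loaded image
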
	I1010 19:24:30.174001  147213 kubeadm.go:934] updating node { 192.168.72.11 8443 v1.31.1 crio true true} ...
	I1010 19:24:30.174129  147213 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-320324 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
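
The [Unit]/[Service]/[Install] fragment above is the systemd drop-in minikube generates for the kubelet; a few lines below it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and activated by the daemon-reload and kubelet start that follow. Assuming systemd on the guest, the effective unit can be inspected after the fact with:

    sudo systemctl cat kubelet                  # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl show kubelet -p ExecStart    # the ExecStart actually in effect after the override
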
	I1010 19:24:30.174221  147213 ssh_runner.go:195] Run: crio config
	I1010 19:24:30.222677  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:30.222702  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:30.222711  147213 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1010 19:24:30.222736  147213 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.11 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-320324 NodeName:no-preload-320324 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 19:24:30.222923  147213 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-320324"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.11
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.11"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 19:24:30.222998  147213 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1010 19:24:30.233755  147213 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 19:24:30.233818  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 19:24:30.243829  147213 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1010 19:24:30.263056  147213 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 19:24:30.282362  147213 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
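
The 2158-byte kubeadm.yaml.new written above is the rendered config shown earlier; minikube does not run a full "kubeadm init" against it but drives individual init phases, as the later log lines show. A condensed sketch of that sequence, with the binary and config paths taken from the log (the kubelet-start and etcd phases follow the same pattern):

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
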
	I1010 19:24:30.300449  147213 ssh_runner.go:195] Run: grep 192.168.72.11	control-plane.minikube.internal$ /etc/hosts
	I1010 19:24:30.304661  147213 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 19:24:30.317462  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:30.445515  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:30.462816  147213 certs.go:68] Setting up /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324 for IP: 192.168.72.11
	I1010 19:24:30.462847  147213 certs.go:194] generating shared ca certs ...
	I1010 19:24:30.462871  147213 certs.go:226] acquiring lock for ca certs: {Name:mk934aa2a82050d7b4e8800492bf64f91d140517 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:30.463074  147213 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key
	I1010 19:24:30.463132  147213 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key
	I1010 19:24:30.463145  147213 certs.go:256] generating profile certs ...
	I1010 19:24:30.463289  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/client.key
	I1010 19:24:30.463364  147213 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key.a7785fc5
	I1010 19:24:30.463413  147213 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key
	I1010 19:24:30.463565  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem (1338 bytes)
	W1010 19:24:30.463604  147213 certs.go:480] ignoring /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876_empty.pem, impossibly tiny 0 bytes
	I1010 19:24:30.463617  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca-key.pem (1679 bytes)
	I1010 19:24:30.463657  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/ca.pem (1078 bytes)
	I1010 19:24:30.463689  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/cert.pem (1123 bytes)
	I1010 19:24:30.463721  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/certs/key.pem (1675 bytes)
	I1010 19:24:30.463774  147213 certs.go:484] found cert: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem (1708 bytes)
	I1010 19:24:30.464502  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 19:24:30.525320  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 19:24:30.565229  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 19:24:30.597731  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1010 19:24:30.626174  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 19:24:30.659991  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 19:24:30.685662  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 19:24:30.710757  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/no-preload-320324/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 19:24:30.736325  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/ssl/certs/888762.pem --> /usr/share/ca-certificates/888762.pem (1708 bytes)
	I1010 19:24:30.771239  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 19:24:30.796467  147213 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19787-81676/.minikube/certs/88876.pem --> /usr/share/ca-certificates/88876.pem (1338 bytes)
	I1010 19:24:30.821925  147213 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 19:24:30.840743  147213 ssh_runner.go:195] Run: openssl version
	I1010 19:24:30.846898  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88876.pem && ln -fs /usr/share/ca-certificates/88876.pem /etc/ssl/certs/88876.pem"
	I1010 19:24:30.858410  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863188  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 18:09 /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.863260  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88876.pem
	I1010 19:24:30.869307  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88876.pem /etc/ssl/certs/51391683.0"
	I1010 19:24:30.880319  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/888762.pem && ln -fs /usr/share/ca-certificates/888762.pem /etc/ssl/certs/888762.pem"
	I1010 19:24:30.891307  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895771  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 18:09 /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.895828  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/888762.pem
	I1010 19:24:30.901510  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/888762.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 19:24:30.912627  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 19:24:30.924330  147213 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929108  147213 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:58 /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.929194  147213 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 19:24:30.935266  147213 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 19:24:30.946714  147213 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 19:24:30.951692  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 19:24:30.957910  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 19:24:30.964296  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 19:24:30.971001  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 19:24:30.977427  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 19:24:30.984201  147213 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
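
The openssl runs above are 24-hour freshness checks: -checkend 86400 makes openssl exit non-zero when the certificate would expire within the next 86400 seconds, and a failing check is what would force the certificate to be regenerated on this restart path. The same check can be repeated by hand against any control-plane cert (path taken from the log):

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"
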
	I1010 19:24:30.990532  147213 kubeadm.go:392] StartCluster: {Name:no-preload-320324 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-320324 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 19:24:30.990622  147213 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 19:24:30.990727  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.033544  147213 cri.go:89] found id: ""
	I1010 19:24:31.033624  147213 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 19:24:31.044956  147213 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1010 19:24:31.044975  147213 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1010 19:24:31.045025  147213 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 19:24:31.056563  147213 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 19:24:31.057705  147213 kubeconfig.go:125] found "no-preload-320324" server: "https://192.168.72.11:8443"
	I1010 19:24:31.059853  147213 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 19:24:31.071304  147213 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.11
	I1010 19:24:31.071338  147213 kubeadm.go:1160] stopping kube-system containers ...
	I1010 19:24:31.071353  147213 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1010 19:24:31.071444  147213 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 19:24:31.107345  147213 cri.go:89] found id: ""
	I1010 19:24:31.107429  147213 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1010 19:24:31.125556  147213 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:24:31.135390  147213 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:24:31.135428  147213 kubeadm.go:157] found existing configuration files:
	
	I1010 19:24:31.135478  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:24:31.144653  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:24:31.144715  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:24:31.154458  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:24:31.163444  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:24:31.163501  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:24:31.172633  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.181939  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:24:31.182001  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:24:31.191638  147213 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:24:31.200846  147213 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:24:31.200935  147213 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:24:31.211048  147213 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:24:31.221008  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:31.352733  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.270546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.474510  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.551517  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:32.707737  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:24:32.707826  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.208647  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.708539  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:33.728647  147213 api_server.go:72] duration metric: took 1.020907246s to wait for apiserver process to appear ...
	I1010 19:24:33.728678  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:24:33.728701  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:30.066635  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.066732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:32.552277  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:35.051399  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:34.028434  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:34.527949  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.028224  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:35.528470  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.028540  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:36.528499  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.028362  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.528483  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.028531  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:38.527865  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:37.025756  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.025787  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.025802  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.078247  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 19:24:37.078283  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 19:24:37.229601  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.237166  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.237204  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:37.728824  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:37.735660  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:37.735700  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.229746  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.234449  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 19:24:38.234491  147213 api_server.go:103] status: https://192.168.72.11:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 19:24:38.729000  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:24:38.737564  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:24:38.751982  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:24:38.752012  147213 api_server.go:131] duration metric: took 5.023326632s to wait for apiserver health ...
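
The 403 → 500 → 200 progression above is the expected restart sequence: anonymous /healthz requests are rejected until the bootstrap RBAC roles that permit unauthenticated health checks exist, the endpoint then answers 500 while post-start hooks such as rbac/bootstrap-roles are still pending, and finally 200 once they complete. The same probe can be reproduced against the address from the log; -k is needed because the request goes straight to the apiserver's serving certificate:

    curl -sk 'https://192.168.72.11:8443/healthz?verbose'
    # or, once a kubeconfig is available:
    kubectl get --raw '/healthz?verbose'
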
	I1010 19:24:38.752023  147213 cni.go:84] Creating CNI manager for ""
	I1010 19:24:38.752030  147213 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:24:38.753351  147213 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:24:34.067208  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:36.067413  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.566729  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:38.754645  147213 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:24:38.772086  147213 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:24:38.792017  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:24:38.800547  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:24:38.800592  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 19:24:38.800602  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 19:24:38.800609  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 19:24:38.800617  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 19:24:38.800624  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:24:38.800629  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 19:24:38.800638  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:24:38.800642  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:24:38.800648  147213 system_pods.go:74] duration metric: took 8.60732ms to wait for pod list to return data ...
	I1010 19:24:38.800654  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:24:38.804628  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:24:38.804663  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:24:38.804680  147213 node_conditions.go:105] duration metric: took 4.021699ms to run NodePressure ...
	I1010 19:24:38.804700  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1010 19:24:39.078452  147213 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087090  147213 kubeadm.go:739] kubelet initialised
	I1010 19:24:39.087116  147213 kubeadm.go:740] duration metric: took 8.636436ms waiting for restarted kubelet to initialise ...
	I1010 19:24:39.087125  147213 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:39.094468  147213 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.108724  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108756  147213 pod_ready.go:82] duration metric: took 14.254631ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.108770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.108780  147213 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.119304  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119335  147213 pod_ready.go:82] duration metric: took 10.543376ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.119345  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "etcd-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.119352  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.127243  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127268  147213 pod_ready.go:82] duration metric: took 7.907414ms for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.127278  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-apiserver-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.127285  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.195549  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195578  147213 pod_ready.go:82] duration metric: took 68.282333ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.195588  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.195594  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.595842  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595871  147213 pod_ready.go:82] duration metric: took 400.267905ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.595880  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-proxy-vn6sv" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.595886  147213 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:39.995731  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995760  147213 pod_ready.go:82] duration metric: took 399.866947ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:39.995770  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "kube-scheduler-no-preload-320324" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:39.995777  147213 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:40.396420  147213 pod_ready.go:98] node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396456  147213 pod_ready.go:82] duration metric: took 400.667834ms for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:24:40.396470  147213 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-320324" hosting pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:40.396482  147213 pod_ready.go:39] duration metric: took 1.309346973s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
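
The pod_ready loop above polls each system pod's Ready condition through the API; the "skipping!" entries fire because the node hosting the pods still reports Ready=False immediately after the kubelet restart. Outside the test harness the same wait can be expressed with kubectl, using the instance-specific pod name from the log and a timeout matching the 4m0s above:

    kubectl --context no-preload-320324 -n kube-system wait \
      --for=condition=Ready pod/metrics-server-6867b74b74-8w9lk --timeout=4m
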
	I1010 19:24:40.396508  147213 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:24:40.409956  147213 ops.go:34] apiserver oom_adj: -16
	I1010 19:24:40.409980  147213 kubeadm.go:597] duration metric: took 9.364998977s to restartPrimaryControlPlane
	I1010 19:24:40.409991  147213 kubeadm.go:394] duration metric: took 9.419470024s to StartCluster
	I1010 19:24:40.410009  147213 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.410085  147213 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:24:40.413037  147213 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:24:40.413448  147213 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.11 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:24:40.413783  147213 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:24:40.413979  147213 config.go:182] Loaded profile config "no-preload-320324": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:24:40.413996  147213 addons.go:69] Setting default-storageclass=true in profile "no-preload-320324"
	I1010 19:24:40.414020  147213 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-320324"
	I1010 19:24:40.413983  147213 addons.go:69] Setting storage-provisioner=true in profile "no-preload-320324"
	I1010 19:24:40.414048  147213 addons.go:234] Setting addon storage-provisioner=true in "no-preload-320324"
	W1010 19:24:40.414057  147213 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:24:40.414091  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414170  147213 addons.go:69] Setting metrics-server=true in profile "no-preload-320324"
	I1010 19:24:40.414230  147213 addons.go:234] Setting addon metrics-server=true in "no-preload-320324"
	W1010 19:24:40.414252  147213 addons.go:243] addon metrics-server should already be in state true
	I1010 19:24:40.414292  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.414612  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414640  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.414678  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.414712  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.415409  147213 out.go:177] * Verifying Kubernetes components...
	I1010 19:24:40.415412  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.415553  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.416812  147213 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:24:40.431363  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1010 19:24:40.431474  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46487
	I1010 19:24:40.431659  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33767
	I1010 19:24:40.431983  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432136  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432156  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.432567  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432587  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432710  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432732  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.432740  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.432749  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.433000  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433079  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433103  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.433468  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.433498  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.436984  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.453362  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.453426  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.454884  147213 addons.go:234] Setting addon default-storageclass=true in "no-preload-320324"
	W1010 19:24:40.454913  147213 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:24:40.454947  147213 host.go:66] Checking if "no-preload-320324" exists ...
	I1010 19:24:40.455335  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.455394  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.470642  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I1010 19:24:40.471118  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.471701  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.471730  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.472241  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.472523  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.473953  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I1010 19:24:40.474196  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I1010 19:24:40.474332  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474672  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.474814  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.474827  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475181  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.475210  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.475310  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475702  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.475785  147213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:24:40.475825  147213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:24:40.475922  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.476046  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.478147  147213 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:24:40.478395  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.479869  147213 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.479896  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:24:40.479922  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.480549  147213 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:24:37.051611  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:39.551952  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:41.553895  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:40.482101  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:24:40.482119  147213 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:24:40.482144  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.484066  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484560  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.484588  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.484833  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.485065  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.485241  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.485272  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485443  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.485788  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.485807  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.485842  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.486017  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.486202  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.486454  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.492533  147213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38967
	I1010 19:24:40.493012  147213 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:24:40.493566  147213 main.go:141] libmachine: Using API Version  1
	I1010 19:24:40.493595  147213 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:24:40.494056  147213 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:24:40.494325  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetState
	I1010 19:24:40.496053  147213 main.go:141] libmachine: (no-preload-320324) Calling .DriverName
	I1010 19:24:40.496301  147213 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.496321  147213 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:24:40.496344  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHHostname
	I1010 19:24:40.499125  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499667  147213 main.go:141] libmachine: (no-preload-320324) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:03:cd", ip: ""} in network mk-no-preload-320324: {Iface:virbr4 ExpiryTime:2024-10-10 20:24:05 +0000 UTC Type:0 Mac:52:54:00:95:03:cd Iaid: IPaddr:192.168.72.11 Prefix:24 Hostname:no-preload-320324 Clientid:01:52:54:00:95:03:cd}
	I1010 19:24:40.499690  147213 main.go:141] libmachine: (no-preload-320324) DBG | domain no-preload-320324 has defined IP address 192.168.72.11 and MAC address 52:54:00:95:03:cd in network mk-no-preload-320324
	I1010 19:24:40.499843  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHPort
	I1010 19:24:40.500022  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHKeyPath
	I1010 19:24:40.500194  147213 main.go:141] libmachine: (no-preload-320324) Calling .GetSSHUsername
	I1010 19:24:40.500357  147213 sshutil.go:53] new ssh client: &{IP:192.168.72.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/no-preload-320324/id_rsa Username:docker}
	I1010 19:24:40.651454  147213 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:24:40.667056  147213 node_ready.go:35] waiting up to 6m0s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:40.782217  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:24:40.803094  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:24:40.803122  147213 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:24:40.812288  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:24:40.837679  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:24:40.837723  147213 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:24:40.882090  147213 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:40.882119  147213 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:24:40.940115  147213 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:24:41.949181  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.136852217s)
	I1010 19:24:41.949258  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949275  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949286  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.167030419s)
	I1010 19:24:41.949327  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949345  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949625  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949652  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949660  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949661  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.949668  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949679  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.949761  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.949804  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.949819  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.949826  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.950811  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950824  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.950827  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950822  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.950845  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:41.950811  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.957797  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:41.957814  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:41.958071  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:41.958077  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:41.958099  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005530  147213 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.065377363s)
	I1010 19:24:42.005590  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.005602  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.005914  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.005937  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.005935  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.005972  147213 main.go:141] libmachine: Making call to close driver server
	I1010 19:24:42.006003  147213 main.go:141] libmachine: (no-preload-320324) Calling .Close
	I1010 19:24:42.006280  147213 main.go:141] libmachine: (no-preload-320324) DBG | Closing plugin on server side
	I1010 19:24:42.006313  147213 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:24:42.006335  147213 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:24:42.006354  147213 addons.go:475] Verifying addon metrics-server=true in "no-preload-320324"
	I1010 19:24:42.008523  147213 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1010 19:24:39.028363  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:39.528571  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.027992  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:40.528552  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.028220  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:41.527889  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:42.028462  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:42.028549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:42.070669  148123 cri.go:89] found id: ""
	I1010 19:24:42.070701  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.070710  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:42.070716  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:42.070775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:42.110693  148123 cri.go:89] found id: ""
	I1010 19:24:42.110731  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.110748  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:42.110756  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:42.110816  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:42.148471  148123 cri.go:89] found id: ""
	I1010 19:24:42.148511  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.148525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:42.148535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:42.148603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:42.184637  148123 cri.go:89] found id: ""
	I1010 19:24:42.184670  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.184683  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:42.184691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:42.184759  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:42.226794  148123 cri.go:89] found id: ""
	I1010 19:24:42.226834  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.226848  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:42.226857  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:42.226942  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:42.262969  148123 cri.go:89] found id: ""
	I1010 19:24:42.263004  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.263017  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:42.263027  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:42.263167  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:42.301063  148123 cri.go:89] found id: ""
	I1010 19:24:42.301088  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.301096  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:42.301102  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:42.301153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:42.338829  148123 cri.go:89] found id: ""
	I1010 19:24:42.338862  148123 logs.go:282] 0 containers: []
	W1010 19:24:42.338873  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:42.338886  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:42.338901  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:42.391426  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:42.391478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:42.405520  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:42.405563  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:42.544245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:42.544275  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:42.544292  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:42.620760  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:42.620804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:42.009965  147213 addons.go:510] duration metric: took 1.596190602s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1010 19:24:42.672792  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:41.066744  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.066850  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:43.557231  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:46.051820  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:45.164389  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:45.179714  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:45.179776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:45.216271  148123 cri.go:89] found id: ""
	I1010 19:24:45.216308  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.216319  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:45.216327  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:45.216394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:45.255109  148123 cri.go:89] found id: ""
	I1010 19:24:45.255154  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.255167  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:45.255176  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:45.255248  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:45.294338  148123 cri.go:89] found id: ""
	I1010 19:24:45.294369  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.294380  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:45.294388  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:45.294457  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:45.339573  148123 cri.go:89] found id: ""
	I1010 19:24:45.339606  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.339626  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:45.339636  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:45.339703  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:45.400184  148123 cri.go:89] found id: ""
	I1010 19:24:45.400214  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.400225  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:45.400234  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:45.400301  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:45.436152  148123 cri.go:89] found id: ""
	I1010 19:24:45.436183  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.436195  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:45.436203  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:45.436262  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.484321  148123 cri.go:89] found id: ""
	I1010 19:24:45.484347  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.484355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:45.484361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:45.484441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:45.532893  148123 cri.go:89] found id: ""
	I1010 19:24:45.532923  148123 logs.go:282] 0 containers: []
	W1010 19:24:45.532932  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:45.532943  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:45.532958  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:45.585183  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:45.585214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:45.638800  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:45.638847  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:45.653928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:45.653961  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:45.733534  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:45.733564  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:45.733580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.314367  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:48.329687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:48.329754  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:48.370372  148123 cri.go:89] found id: ""
	I1010 19:24:48.370400  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.370409  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:48.370415  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:48.370490  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:48.409362  148123 cri.go:89] found id: ""
	I1010 19:24:48.409396  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.409407  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:48.409415  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:48.409500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:48.449640  148123 cri.go:89] found id: ""
	I1010 19:24:48.449672  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.449681  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:48.449687  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:48.449746  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:48.485053  148123 cri.go:89] found id: ""
	I1010 19:24:48.485104  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.485121  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:48.485129  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:48.485183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:48.520083  148123 cri.go:89] found id: ""
	I1010 19:24:48.520113  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.520121  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:48.520127  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:48.520185  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:48.555112  148123 cri.go:89] found id: ""
	I1010 19:24:48.555138  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.555149  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:48.555156  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:48.555241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:45.171882  147213 node_ready.go:53] node "no-preload-320324" has status "Ready":"False"
	I1010 19:24:47.673073  147213 node_ready.go:49] node "no-preload-320324" has status "Ready":"True"
	I1010 19:24:47.673103  147213 node_ready.go:38] duration metric: took 7.00601327s for node "no-preload-320324" to be "Ready" ...
	I1010 19:24:47.673117  147213 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:24:47.682195  147213 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690079  147213 pod_ready.go:93] pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.690111  147213 pod_ready.go:82] duration metric: took 7.882823ms for pod "coredns-7c65d6cfc9-86brb" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.690126  147213 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698009  147213 pod_ready.go:93] pod "etcd-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:47.698038  147213 pod_ready.go:82] duration metric: took 7.903016ms for pod "etcd-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:47.698052  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:45.066893  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:47.566144  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.551853  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.050365  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:48.592682  148123 cri.go:89] found id: ""
	I1010 19:24:48.592710  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.592719  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:48.592725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:48.592775  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:48.628449  148123 cri.go:89] found id: ""
	I1010 19:24:48.628482  148123 logs.go:282] 0 containers: []
	W1010 19:24:48.628490  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:48.628500  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:48.628513  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:48.709385  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:48.709428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:48.752542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:48.752677  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:48.812331  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:48.812385  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:48.827057  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:48.827095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:48.908312  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.409122  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:51.423404  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:51.423482  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:51.465742  148123 cri.go:89] found id: ""
	I1010 19:24:51.465781  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.465793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:51.465803  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:51.465875  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:51.501597  148123 cri.go:89] found id: ""
	I1010 19:24:51.501630  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.501641  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:51.501648  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:51.501711  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:51.537500  148123 cri.go:89] found id: ""
	I1010 19:24:51.537539  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.537551  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:51.537559  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:51.537626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:51.579546  148123 cri.go:89] found id: ""
	I1010 19:24:51.579576  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.579587  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:51.579595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:51.579660  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:51.619893  148123 cri.go:89] found id: ""
	I1010 19:24:51.619917  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.619925  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:51.619931  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:51.619999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:51.655787  148123 cri.go:89] found id: ""
	I1010 19:24:51.655824  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.655837  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:51.655846  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:51.655921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:51.699582  148123 cri.go:89] found id: ""
	I1010 19:24:51.699611  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.699619  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:51.699625  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:51.699685  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:51.738658  148123 cri.go:89] found id: ""
	I1010 19:24:51.738689  148123 logs.go:282] 0 containers: []
	W1010 19:24:51.738697  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:51.738707  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:51.738721  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:51.789556  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:51.789587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:51.858919  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:51.858968  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:51.887818  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:51.887854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:51.964408  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:51.964434  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:51.964449  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:49.705130  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:51.705847  147213 pod_ready.go:103] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.205374  147213 pod_ready.go:93] pod "kube-apiserver-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.205401  147213 pod_ready.go:82] duration metric: took 5.507341974s for pod "kube-apiserver-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.205413  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210237  147213 pod_ready.go:93] pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.210259  147213 pod_ready.go:82] duration metric: took 4.83925ms for pod "kube-controller-manager-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.210269  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215158  147213 pod_ready.go:93] pod "kube-proxy-vn6sv" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.215186  147213 pod_ready.go:82] duration metric: took 4.909888ms for pod "kube-proxy-vn6sv" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.215198  147213 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220077  147213 pod_ready.go:93] pod "kube-scheduler-no-preload-320324" in "kube-system" namespace has status "Ready":"True"
	I1010 19:24:53.220097  147213 pod_ready.go:82] duration metric: took 4.890652ms for pod "kube-scheduler-no-preload-320324" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:53.220105  147213 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	I1010 19:24:50.066165  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:52.066343  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:53.552604  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:56.050748  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.545627  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:54.560748  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:54.560830  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:54.595863  148123 cri.go:89] found id: ""
	I1010 19:24:54.595893  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.595903  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:54.595912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:54.595978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:54.633645  148123 cri.go:89] found id: ""
	I1010 19:24:54.633681  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.633693  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:54.633701  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:54.633761  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:54.668269  148123 cri.go:89] found id: ""
	I1010 19:24:54.668299  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.668311  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:54.668317  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:54.668369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:54.706559  148123 cri.go:89] found id: ""
	I1010 19:24:54.706591  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.706600  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:54.706608  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:54.706673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:54.746248  148123 cri.go:89] found id: ""
	I1010 19:24:54.746283  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.746295  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:54.746303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:54.746383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:54.782982  148123 cri.go:89] found id: ""
	I1010 19:24:54.783017  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.783027  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:54.783033  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:54.783085  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:54.819664  148123 cri.go:89] found id: ""
	I1010 19:24:54.819700  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.819713  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:54.819722  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:54.819797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:54.859603  148123 cri.go:89] found id: ""
	I1010 19:24:54.859632  148123 logs.go:282] 0 containers: []
	W1010 19:24:54.859640  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:54.859650  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:54.859662  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:54.910949  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:54.910987  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:54.925941  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:54.925975  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:55.009626  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:55.009654  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:55.009669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.097196  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:55.097237  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:57.641732  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:24:57.661141  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:24:57.661222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:24:57.716054  148123 cri.go:89] found id: ""
	I1010 19:24:57.716086  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.716094  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:24:57.716100  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:24:57.716178  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:24:57.766862  148123 cri.go:89] found id: ""
	I1010 19:24:57.766892  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.766906  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:24:57.766917  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:24:57.766989  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:24:57.814776  148123 cri.go:89] found id: ""
	I1010 19:24:57.814808  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.814821  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:24:57.814829  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:24:57.814899  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:24:57.850458  148123 cri.go:89] found id: ""
	I1010 19:24:57.850495  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.850505  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:24:57.850516  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:24:57.850667  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:24:57.886541  148123 cri.go:89] found id: ""
	I1010 19:24:57.886566  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.886575  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:24:57.886581  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:24:57.886645  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:24:57.920748  148123 cri.go:89] found id: ""
	I1010 19:24:57.920783  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.920795  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:24:57.920802  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:24:57.920887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:24:57.957801  148123 cri.go:89] found id: ""
	I1010 19:24:57.957833  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.957844  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:24:57.957852  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:24:57.957919  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:24:57.995648  148123 cri.go:89] found id: ""
	I1010 19:24:57.995683  148123 logs.go:282] 0 containers: []
	W1010 19:24:57.995694  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:24:57.995706  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:24:57.995723  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:24:58.034987  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:24:58.035030  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:24:58.089014  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:24:58.089066  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:24:58.104179  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:24:58.104211  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:24:58.178239  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:24:58.178271  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:24:58.178291  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:55.229459  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.727298  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:54.566779  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:57.065902  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:58.051248  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.550512  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:00.756057  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:00.770086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:00.770150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:00.811400  148123 cri.go:89] found id: ""
	I1010 19:25:00.811444  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.811456  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:00.811466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:00.811533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:00.848821  148123 cri.go:89] found id: ""
	I1010 19:25:00.848859  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.848870  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:00.848877  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:00.848947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:00.883529  148123 cri.go:89] found id: ""
	I1010 19:25:00.883557  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.883566  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:00.883573  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:00.883631  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:00.922952  148123 cri.go:89] found id: ""
	I1010 19:25:00.922982  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.922992  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:00.922998  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:00.923057  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:00.956116  148123 cri.go:89] found id: ""
	I1010 19:25:00.956147  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.956159  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:00.956168  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:00.956233  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:00.998892  148123 cri.go:89] found id: ""
	I1010 19:25:00.998919  148123 logs.go:282] 0 containers: []
	W1010 19:25:00.998930  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:00.998939  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:00.998996  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:01.037617  148123 cri.go:89] found id: ""
	I1010 19:25:01.037649  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.037657  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:01.037663  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:01.037717  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:01.078003  148123 cri.go:89] found id: ""
	I1010 19:25:01.078034  148123 logs.go:282] 0 containers: []
	W1010 19:25:01.078046  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:01.078055  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:01.078069  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:01.118948  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:01.118977  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:01.171053  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:01.171091  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:01.187027  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:01.187060  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:01.261288  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:01.261315  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:01.261330  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:24:59.728997  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.227142  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:24:59.566448  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.066184  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:02.551951  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:05.050558  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:03.848144  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:03.862527  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:03.862601  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:03.899120  148123 cri.go:89] found id: ""
	I1010 19:25:03.899159  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.899180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:03.899187  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:03.899276  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:03.935461  148123 cri.go:89] found id: ""
	I1010 19:25:03.935492  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.935500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:03.935507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:03.935569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:03.971892  148123 cri.go:89] found id: ""
	I1010 19:25:03.971925  148123 logs.go:282] 0 containers: []
	W1010 19:25:03.971937  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:03.971945  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:03.972019  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:04.008341  148123 cri.go:89] found id: ""
	I1010 19:25:04.008368  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.008377  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:04.008390  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:04.008447  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:04.042007  148123 cri.go:89] found id: ""
	I1010 19:25:04.042036  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.042044  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:04.042051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:04.042101  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:04.078017  148123 cri.go:89] found id: ""
	I1010 19:25:04.078045  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.078053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:04.078059  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:04.078112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:04.116792  148123 cri.go:89] found id: ""
	I1010 19:25:04.116823  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.116832  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:04.116839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:04.116928  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:04.153468  148123 cri.go:89] found id: ""
	I1010 19:25:04.153496  148123 logs.go:282] 0 containers: []
	W1010 19:25:04.153503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:04.153513  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:04.153525  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:04.230646  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:04.230683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:04.270975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:04.271015  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.320845  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:04.320902  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:04.337789  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:04.337828  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:04.416077  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:06.916412  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:06.931309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:06.931376  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:06.969224  148123 cri.go:89] found id: ""
	I1010 19:25:06.969257  148123 logs.go:282] 0 containers: []
	W1010 19:25:06.969269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:06.969277  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:06.969346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:07.006598  148123 cri.go:89] found id: ""
	I1010 19:25:07.006653  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.006667  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:07.006676  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:07.006744  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:07.046392  148123 cri.go:89] found id: ""
	I1010 19:25:07.046418  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.046427  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:07.046433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:07.046491  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:07.089570  148123 cri.go:89] found id: ""
	I1010 19:25:07.089603  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.089615  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:07.089624  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:07.089689  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:07.127152  148123 cri.go:89] found id: ""
	I1010 19:25:07.127185  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.127195  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:07.127204  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:07.127282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:07.160516  148123 cri.go:89] found id: ""
	I1010 19:25:07.160543  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.160554  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:07.160563  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:07.160635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:07.197270  148123 cri.go:89] found id: ""
	I1010 19:25:07.197307  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.197318  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:07.197327  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:07.197395  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:07.232658  148123 cri.go:89] found id: ""
	I1010 19:25:07.232687  148123 logs.go:282] 0 containers: []
	W1010 19:25:07.232696  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:07.232706  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:07.232720  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:07.246579  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:07.246609  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:07.315200  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:07.315230  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:07.315251  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:07.389532  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:07.389580  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:07.438808  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:07.438837  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:04.227537  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.727865  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:04.067121  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:06.565089  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:08.565565  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:07.051371  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.051420  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.054211  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:09.990294  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:10.004155  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:10.004270  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:10.039034  148123 cri.go:89] found id: ""
	I1010 19:25:10.039068  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.039078  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:10.039087  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:10.039174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:10.079936  148123 cri.go:89] found id: ""
	I1010 19:25:10.079967  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.079978  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:10.079985  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:10.080068  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:10.116441  148123 cri.go:89] found id: ""
	I1010 19:25:10.116471  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.116483  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:10.116491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:10.116556  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:10.151998  148123 cri.go:89] found id: ""
	I1010 19:25:10.152045  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.152058  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:10.152075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:10.152153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:10.188514  148123 cri.go:89] found id: ""
	I1010 19:25:10.188547  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.188565  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:10.188574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:10.188640  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:10.223648  148123 cri.go:89] found id: ""
	I1010 19:25:10.223682  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.223694  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:10.223702  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:10.223771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:10.264117  148123 cri.go:89] found id: ""
	I1010 19:25:10.264143  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.264151  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:10.264158  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:10.264215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:10.302566  148123 cri.go:89] found id: ""
	I1010 19:25:10.302601  148123 logs.go:282] 0 containers: []
	W1010 19:25:10.302613  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:10.302625  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:10.302649  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:10.316567  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:10.316606  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:10.388642  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:10.388674  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:10.388692  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:10.471035  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:10.471090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:10.519600  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:10.519645  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:13.075428  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:13.090830  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:13.090915  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:13.126302  148123 cri.go:89] found id: ""
	I1010 19:25:13.126336  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.126348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:13.126357  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:13.126429  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:13.163309  148123 cri.go:89] found id: ""
	I1010 19:25:13.163344  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.163357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:13.163365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:13.163428  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:13.197569  148123 cri.go:89] found id: ""
	I1010 19:25:13.197605  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.197615  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:13.197621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:13.197692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:13.236299  148123 cri.go:89] found id: ""
	I1010 19:25:13.236328  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.236336  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:13.236342  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:13.236406  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:13.275667  148123 cri.go:89] found id: ""
	I1010 19:25:13.275696  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.275705  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:13.275711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:13.275776  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:13.309728  148123 cri.go:89] found id: ""
	I1010 19:25:13.309763  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.309774  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:13.309786  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:13.309854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:13.347464  148123 cri.go:89] found id: ""
	I1010 19:25:13.347493  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.347504  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:13.347513  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:13.347576  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:13.385086  148123 cri.go:89] found id: ""
	I1010 19:25:13.385119  148123 logs.go:282] 0 containers: []
	W1010 19:25:13.385130  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:13.385142  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:13.385156  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:13.466513  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:13.466553  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:13.508610  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:13.508651  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:09.226850  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:11.227241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.726879  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:10.565663  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:12.565845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.555465  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:16.051764  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:13.564897  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:13.564932  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:13.578771  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:13.578803  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:13.651857  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.153066  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:16.167613  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:16.167698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:16.203867  148123 cri.go:89] found id: ""
	I1010 19:25:16.203897  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.203906  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:16.203912  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:16.203965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:16.242077  148123 cri.go:89] found id: ""
	I1010 19:25:16.242109  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.242130  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:16.242138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:16.242234  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:16.283792  148123 cri.go:89] found id: ""
	I1010 19:25:16.283825  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.283836  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:16.283844  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:16.283907  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:16.319934  148123 cri.go:89] found id: ""
	I1010 19:25:16.319969  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.319980  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:16.319990  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:16.320063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:16.360439  148123 cri.go:89] found id: ""
	I1010 19:25:16.360482  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.360495  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:16.360504  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:16.360569  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:16.395890  148123 cri.go:89] found id: ""
	I1010 19:25:16.395922  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.395931  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:16.395941  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:16.396009  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:16.431396  148123 cri.go:89] found id: ""
	I1010 19:25:16.431442  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.431456  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:16.431463  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:16.431531  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:16.471768  148123 cri.go:89] found id: ""
	I1010 19:25:16.471796  148123 logs.go:282] 0 containers: []
	W1010 19:25:16.471804  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:16.471814  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:16.471830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:16.526112  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:16.526155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:16.541041  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:16.541074  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:16.623245  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:16.623273  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:16.623289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:16.710840  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:16.710884  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:15.727171  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.728705  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:15.067362  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:17.566242  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:18.551207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:21.050222  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:19.257566  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:19.273982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:19.274063  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:19.311360  148123 cri.go:89] found id: ""
	I1010 19:25:19.311392  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.311404  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:19.311411  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:19.311475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:19.347025  148123 cri.go:89] found id: ""
	I1010 19:25:19.347053  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.347062  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:19.347068  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:19.347120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:19.383575  148123 cri.go:89] found id: ""
	I1010 19:25:19.383608  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.383620  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:19.383633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:19.383698  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:19.419245  148123 cri.go:89] found id: ""
	I1010 19:25:19.419270  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.419278  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:19.419284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:19.419334  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:19.453563  148123 cri.go:89] found id: ""
	I1010 19:25:19.453591  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.453601  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:19.453658  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:19.453715  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:19.496430  148123 cri.go:89] found id: ""
	I1010 19:25:19.496458  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.496466  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:19.496473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:19.496533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:19.534626  148123 cri.go:89] found id: ""
	I1010 19:25:19.534670  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.534682  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:19.534691  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:19.534757  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:19.574186  148123 cri.go:89] found id: ""
	I1010 19:25:19.574236  148123 logs.go:282] 0 containers: []
	W1010 19:25:19.574248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:19.574261  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:19.574283  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:19.587952  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:19.587989  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:19.664607  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:19.664644  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:19.664664  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:19.742185  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:19.742224  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:19.781530  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:19.781562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.335398  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:22.349627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:22.349693  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:22.389075  148123 cri.go:89] found id: ""
	I1010 19:25:22.389102  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.389110  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:22.389117  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:22.389168  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:22.424331  148123 cri.go:89] found id: ""
	I1010 19:25:22.424368  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.424381  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:22.424389  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:22.424460  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:22.464011  148123 cri.go:89] found id: ""
	I1010 19:25:22.464051  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.464062  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:22.464070  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:22.464141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:22.500786  148123 cri.go:89] found id: ""
	I1010 19:25:22.500819  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.500828  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:22.500835  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:22.500914  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:22.538016  148123 cri.go:89] found id: ""
	I1010 19:25:22.538053  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.538061  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:22.538068  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:22.538124  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:22.576600  148123 cri.go:89] found id: ""
	I1010 19:25:22.576629  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.576637  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:22.576643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:22.576702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:22.616020  148123 cri.go:89] found id: ""
	I1010 19:25:22.616048  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.616057  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:22.616064  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:22.616114  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:22.652238  148123 cri.go:89] found id: ""
	I1010 19:25:22.652271  148123 logs.go:282] 0 containers: []
	W1010 19:25:22.652283  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:22.652295  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:22.652311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:22.726182  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:22.726216  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:22.726234  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:22.814040  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:22.814086  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:22.859254  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:22.859289  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:22.916565  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:22.916607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:20.227871  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.732566  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:20.066872  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:22.566173  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:23.050833  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.551662  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.432636  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:25.446982  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:25.447047  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.482440  148123 cri.go:89] found id: ""
	I1010 19:25:25.482472  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.482484  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:25.482492  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:25.482552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:25.517709  148123 cri.go:89] found id: ""
	I1010 19:25:25.517742  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.517756  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:25.517764  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:25.517867  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:25.561504  148123 cri.go:89] found id: ""
	I1010 19:25:25.561532  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.561544  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:25.561552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:25.561616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:25.601704  148123 cri.go:89] found id: ""
	I1010 19:25:25.601741  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.601753  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:25.601762  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:25.601825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:25.638519  148123 cri.go:89] found id: ""
	I1010 19:25:25.638544  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.638552  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:25.638558  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:25.638609  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:25.673390  148123 cri.go:89] found id: ""
	I1010 19:25:25.673425  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.673436  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:25.673447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:25.673525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:25.708251  148123 cri.go:89] found id: ""
	I1010 19:25:25.708278  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.708286  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:25.708293  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:25.708354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:25.749793  148123 cri.go:89] found id: ""
	I1010 19:25:25.749826  148123 logs.go:282] 0 containers: []
	W1010 19:25:25.749837  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:25.749846  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:25.749861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:25.802511  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:25.802554  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:25.817523  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:25.817562  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:25.898595  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:25.898619  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:25.898635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:25.978629  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:25.978684  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:28.520165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:28.532725  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:28.532793  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:25.226875  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.729015  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:25.066298  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.066963  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:27.551915  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.558497  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:28.571667  148123 cri.go:89] found id: ""
	I1010 19:25:28.571695  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.571704  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:28.571710  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:28.571770  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:28.608136  148123 cri.go:89] found id: ""
	I1010 19:25:28.608165  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.608174  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:28.608181  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:28.608244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:28.642123  148123 cri.go:89] found id: ""
	I1010 19:25:28.642161  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.642173  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:28.642181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:28.642242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:28.678600  148123 cri.go:89] found id: ""
	I1010 19:25:28.678633  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.678643  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:28.678651  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:28.678702  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:28.716627  148123 cri.go:89] found id: ""
	I1010 19:25:28.716660  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.716673  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:28.716681  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:28.716766  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:28.752750  148123 cri.go:89] found id: ""
	I1010 19:25:28.752786  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.752798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:28.752806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:28.752898  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:28.790021  148123 cri.go:89] found id: ""
	I1010 19:25:28.790054  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.790066  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:28.790075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:28.790128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:28.828760  148123 cri.go:89] found id: ""
	I1010 19:25:28.828803  148123 logs.go:282] 0 containers: []
	W1010 19:25:28.828827  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:28.828839  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:28.828877  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:28.879773  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:28.879811  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:28.894311  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:28.894342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:28.968675  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:28.968706  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:28.968729  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:29.045862  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:29.045913  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
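	(The block that repeats above is minikube's log collector probing the node after the apiserver stopped answering on localhost:8443: it checks for each control-plane container with crictl, then falls back to the kubelet and CRI-O journals, dmesg, and "kubectl describe nodes". A minimal shell sketch of the same probe, assuming crictl, journalctl and the bundled kubectl are present on the node; this is an illustrative reconstruction of the commands captured above, not part of the log itself:)

	    # probe each expected control-plane container, as the collector does
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done
	    # fall back to node-level logs when no containers are found
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig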
	I1010 19:25:31.588772  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:31.601453  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:31.601526  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:31.637056  148123 cri.go:89] found id: ""
	I1010 19:25:31.637090  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.637101  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:31.637108  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:31.637183  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:31.677825  148123 cri.go:89] found id: ""
	I1010 19:25:31.677854  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.677862  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:31.677867  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:31.677920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:31.715372  148123 cri.go:89] found id: ""
	I1010 19:25:31.715402  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.715411  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:31.715419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:31.715470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:31.753767  148123 cri.go:89] found id: ""
	I1010 19:25:31.753794  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.753806  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:31.753814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:31.753879  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:31.792999  148123 cri.go:89] found id: ""
	I1010 19:25:31.793024  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.793035  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:31.793050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:31.793106  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:31.831571  148123 cri.go:89] found id: ""
	I1010 19:25:31.831598  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.831607  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:31.831614  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:31.831673  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:31.880382  148123 cri.go:89] found id: ""
	I1010 19:25:31.880422  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.880436  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:31.880447  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:31.880527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:31.919596  148123 cri.go:89] found id: ""
	I1010 19:25:31.919627  148123 logs.go:282] 0 containers: []
	W1010 19:25:31.919637  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:31.919647  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:31.919661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:31.967963  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:31.968001  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:31.983064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:31.983106  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:32.056073  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:32.056105  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:32.056122  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:32.142927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:32.142978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:30.226683  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.227047  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:29.565699  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:31.566109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:32.051411  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.052064  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.550062  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.690025  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:34.705326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:34.705413  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:34.740756  148123 cri.go:89] found id: ""
	I1010 19:25:34.740786  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.740793  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:34.740799  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:34.740872  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:34.777021  148123 cri.go:89] found id: ""
	I1010 19:25:34.777049  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.777061  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:34.777069  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:34.777123  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:34.826699  148123 cri.go:89] found id: ""
	I1010 19:25:34.826735  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.826747  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:34.826754  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:34.826896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:34.864297  148123 cri.go:89] found id: ""
	I1010 19:25:34.864329  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.864340  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:34.864348  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:34.864415  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:34.899198  148123 cri.go:89] found id: ""
	I1010 19:25:34.899234  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.899247  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:34.899256  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:34.899315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:34.934132  148123 cri.go:89] found id: ""
	I1010 19:25:34.934162  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.934170  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:34.934176  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:34.934238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:34.969858  148123 cri.go:89] found id: ""
	I1010 19:25:34.969891  148123 logs.go:282] 0 containers: []
	W1010 19:25:34.969903  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:34.969911  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:34.969978  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:35.006082  148123 cri.go:89] found id: ""
	I1010 19:25:35.006121  148123 logs.go:282] 0 containers: []
	W1010 19:25:35.006132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:35.006145  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:35.006160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:35.060596  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:35.060631  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:35.076246  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:35.076281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:35.151879  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:35.151912  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:35.151931  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:35.231802  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:35.231845  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:37.774550  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:37.787871  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:37.787938  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:37.824273  148123 cri.go:89] found id: ""
	I1010 19:25:37.824306  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.824317  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:37.824325  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:37.824410  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:37.860548  148123 cri.go:89] found id: ""
	I1010 19:25:37.860582  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.860593  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:37.860601  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:37.860677  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:37.894616  148123 cri.go:89] found id: ""
	I1010 19:25:37.894644  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.894654  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:37.894662  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:37.894730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:37.931247  148123 cri.go:89] found id: ""
	I1010 19:25:37.931272  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.931280  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:37.931287  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:37.931337  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:37.968028  148123 cri.go:89] found id: ""
	I1010 19:25:37.968068  148123 logs.go:282] 0 containers: []
	W1010 19:25:37.968079  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:37.968086  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:37.968153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:38.004661  148123 cri.go:89] found id: ""
	I1010 19:25:38.004692  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.004700  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:38.004706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:38.004760  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:38.040874  148123 cri.go:89] found id: ""
	I1010 19:25:38.040906  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.040915  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:38.040922  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:38.040990  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:38.083771  148123 cri.go:89] found id: ""
	I1010 19:25:38.083802  148123 logs.go:282] 0 containers: []
	W1010 19:25:38.083811  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:38.083821  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:38.083836  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:38.099645  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:38.099683  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:38.172168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:38.172207  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:38.172223  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:38.248523  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:38.248561  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:38.306594  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:38.306630  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:34.728106  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:37.226285  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:34.065919  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:36.066751  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.067361  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:38.550359  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.551190  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.873868  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:40.889561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:40.889668  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:40.926769  148123 cri.go:89] found id: ""
	I1010 19:25:40.926800  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.926808  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:40.926816  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:40.926869  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:40.962564  148123 cri.go:89] found id: ""
	I1010 19:25:40.962592  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.962600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:40.962606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:40.962659  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:40.997856  148123 cri.go:89] found id: ""
	I1010 19:25:40.997885  148123 logs.go:282] 0 containers: []
	W1010 19:25:40.997894  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:40.997900  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:40.997951  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:41.035479  148123 cri.go:89] found id: ""
	I1010 19:25:41.035506  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.035514  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:41.035520  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:41.035575  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:41.077733  148123 cri.go:89] found id: ""
	I1010 19:25:41.077817  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.077830  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:41.077840  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:41.077916  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:41.115439  148123 cri.go:89] found id: ""
	I1010 19:25:41.115476  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.115489  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:41.115498  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:41.115552  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:41.152586  148123 cri.go:89] found id: ""
	I1010 19:25:41.152625  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.152636  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:41.152643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:41.152695  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:41.186807  148123 cri.go:89] found id: ""
	I1010 19:25:41.186840  148123 logs.go:282] 0 containers: []
	W1010 19:25:41.186851  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:41.186862  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:41.186880  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:41.200404  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:41.200447  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:41.280717  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:41.280744  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:41.280760  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:41.360561  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:41.360611  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:41.404461  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:41.404497  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:39.226903  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:41.227077  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.727197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:40.570404  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.066523  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.050813  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.051094  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:43.955723  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:43.970895  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:43.970964  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:44.012162  148123 cri.go:89] found id: ""
	I1010 19:25:44.012194  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.012203  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:44.012209  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:44.012282  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:44.074583  148123 cri.go:89] found id: ""
	I1010 19:25:44.074606  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.074614  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:44.074620  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:44.074675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:44.133562  148123 cri.go:89] found id: ""
	I1010 19:25:44.133588  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.133596  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:44.133602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:44.133658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:44.168832  148123 cri.go:89] found id: ""
	I1010 19:25:44.168883  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.168896  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:44.168908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:44.168967  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:44.204896  148123 cri.go:89] found id: ""
	I1010 19:25:44.204923  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.204934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:44.204943  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:44.205002  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:44.242410  148123 cri.go:89] found id: ""
	I1010 19:25:44.242437  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.242448  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:44.242456  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:44.242524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:44.278553  148123 cri.go:89] found id: ""
	I1010 19:25:44.278581  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.278589  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:44.278595  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:44.278658  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:44.316099  148123 cri.go:89] found id: ""
	I1010 19:25:44.316125  148123 logs.go:282] 0 containers: []
	W1010 19:25:44.316132  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:44.316141  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:44.316155  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:44.365809  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:44.365860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:44.380129  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:44.380162  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:44.454241  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:44.454267  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:44.454281  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:44.530928  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:44.530974  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.078279  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:47.092233  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:47.092310  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:47.129517  148123 cri.go:89] found id: ""
	I1010 19:25:47.129557  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.129573  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:47.129582  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:47.129654  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:47.170309  148123 cri.go:89] found id: ""
	I1010 19:25:47.170345  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.170357  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:47.170365  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:47.170432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:47.205110  148123 cri.go:89] found id: ""
	I1010 19:25:47.205145  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.205156  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:47.205162  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:47.205228  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:47.245668  148123 cri.go:89] found id: ""
	I1010 19:25:47.245695  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.245704  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:47.245711  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:47.245768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:47.284549  148123 cri.go:89] found id: ""
	I1010 19:25:47.284576  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.284584  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:47.284590  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:47.284643  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:47.324676  148123 cri.go:89] found id: ""
	I1010 19:25:47.324708  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.324720  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:47.324729  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:47.324782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:47.364315  148123 cri.go:89] found id: ""
	I1010 19:25:47.364347  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.364356  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:47.364362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:47.364433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:47.400611  148123 cri.go:89] found id: ""
	I1010 19:25:47.400641  148123 logs.go:282] 0 containers: []
	W1010 19:25:47.400652  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:47.400664  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:47.400680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:47.415185  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:47.415227  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:47.481638  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:47.481665  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:47.481681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:47.560135  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:47.560171  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:47.602144  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:47.602174  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:46.227386  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:48.227699  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:45.066887  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.565340  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:47.051459  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:49.550170  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:51.554542  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.151770  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:50.165336  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:50.165403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:50.201045  148123 cri.go:89] found id: ""
	I1010 19:25:50.201072  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.201082  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:50.201089  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:50.201154  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:50.237047  148123 cri.go:89] found id: ""
	I1010 19:25:50.237082  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.237094  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:50.237102  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:50.237174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:50.273727  148123 cri.go:89] found id: ""
	I1010 19:25:50.273756  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.273767  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:50.273780  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:50.273843  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:50.311397  148123 cri.go:89] found id: ""
	I1010 19:25:50.311424  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.311433  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:50.311450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:50.311513  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:50.347593  148123 cri.go:89] found id: ""
	I1010 19:25:50.347625  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.347637  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:50.347705  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:50.347782  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:50.382195  148123 cri.go:89] found id: ""
	I1010 19:25:50.382228  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.382240  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:50.382247  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:50.382313  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:50.419189  148123 cri.go:89] found id: ""
	I1010 19:25:50.419221  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.419229  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:50.419236  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:50.419297  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:50.454368  148123 cri.go:89] found id: ""
	I1010 19:25:50.454399  148123 logs.go:282] 0 containers: []
	W1010 19:25:50.454410  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:50.454421  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:50.454440  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:50.495575  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:50.495623  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.548691  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:50.548737  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:50.564585  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:50.564621  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:50.637152  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:50.637174  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:50.637190  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.221939  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:53.235889  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:53.235968  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:53.278589  148123 cri.go:89] found id: ""
	I1010 19:25:53.278620  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.278632  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:53.278640  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:53.278705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:53.315062  148123 cri.go:89] found id: ""
	I1010 19:25:53.315090  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.315101  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:53.315109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:53.315182  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:53.348994  148123 cri.go:89] found id: ""
	I1010 19:25:53.349031  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.349040  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:53.349046  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:53.349099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:53.383334  148123 cri.go:89] found id: ""
	I1010 19:25:53.383363  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.383373  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:53.383380  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:53.383450  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:53.417888  148123 cri.go:89] found id: ""
	I1010 19:25:53.417920  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.417931  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:53.417938  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:53.418013  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:53.454757  148123 cri.go:89] found id: ""
	I1010 19:25:53.454787  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.454798  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:53.454806  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:53.454871  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:53.491422  148123 cri.go:89] found id: ""
	I1010 19:25:53.491452  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.491464  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:53.491472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:53.491541  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:53.531240  148123 cri.go:89] found id: ""
	I1010 19:25:53.531271  148123 logs.go:282] 0 containers: []
	W1010 19:25:53.531282  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:53.531294  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:53.531310  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:50.727196  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.226957  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:50.065907  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:52.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:54.051112  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:56.554137  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:53.582576  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:53.582608  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:53.596481  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:53.596511  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:53.666134  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:53.666162  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:53.666178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:53.745621  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:53.745659  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.290744  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:56.307536  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:56.307610  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:56.353456  148123 cri.go:89] found id: ""
	I1010 19:25:56.353484  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.353494  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:56.353501  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:56.353553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:56.394093  148123 cri.go:89] found id: ""
	I1010 19:25:56.394120  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.394131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:56.394138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:56.394203  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:56.428388  148123 cri.go:89] found id: ""
	I1010 19:25:56.428428  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.428441  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:56.428450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:56.428524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:56.464414  148123 cri.go:89] found id: ""
	I1010 19:25:56.464459  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.464472  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:56.464480  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:56.464563  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:56.505271  148123 cri.go:89] found id: ""
	I1010 19:25:56.505307  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.505316  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:56.505322  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:56.505374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:56.546424  148123 cri.go:89] found id: ""
	I1010 19:25:56.546456  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.546467  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:56.546475  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:56.546545  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:56.587458  148123 cri.go:89] found id: ""
	I1010 19:25:56.587489  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.587501  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:56.587509  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:56.587588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:56.628248  148123 cri.go:89] found id: ""
	I1010 19:25:56.628279  148123 logs.go:282] 0 containers: []
	W1010 19:25:56.628291  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:56.628304  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:56.628320  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:56.642109  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:56.642141  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:56.723646  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:56.723678  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:56.723696  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:56.805849  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:56.805899  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:56.849863  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:56.849891  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:25:55.230447  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.726896  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:55.066248  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:57.565240  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.051145  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:01.554276  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.407093  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:25:59.422915  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:25:59.423005  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:25:59.462867  148123 cri.go:89] found id: ""
	I1010 19:25:59.462898  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.462909  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:25:59.462917  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:25:59.462980  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:25:59.496931  148123 cri.go:89] found id: ""
	I1010 19:25:59.496958  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.496968  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:25:59.496983  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:25:59.497049  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:25:59.532241  148123 cri.go:89] found id: ""
	I1010 19:25:59.532271  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.532283  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:25:59.532291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:25:59.532352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:25:59.571285  148123 cri.go:89] found id: ""
	I1010 19:25:59.571313  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.571324  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:25:59.571331  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:25:59.571391  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:25:59.606687  148123 cri.go:89] found id: ""
	I1010 19:25:59.606721  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.606734  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:25:59.606741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:25:59.606800  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:25:59.643247  148123 cri.go:89] found id: ""
	I1010 19:25:59.643276  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.643286  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:25:59.643294  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:25:59.643369  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:25:59.679305  148123 cri.go:89] found id: ""
	I1010 19:25:59.679335  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.679344  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:25:59.679350  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:25:59.679407  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:25:59.716149  148123 cri.go:89] found id: ""
	I1010 19:25:59.716198  148123 logs.go:282] 0 containers: []
	W1010 19:25:59.716210  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:25:59.716222  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:25:59.716239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:25:59.730334  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:25:59.730364  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:25:59.799759  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.799786  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:25:59.799804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:25:59.881883  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:25:59.881925  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:25:59.923755  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:25:59.923784  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.478043  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:02.492627  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:02.492705  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:02.532133  148123 cri.go:89] found id: ""
	I1010 19:26:02.532160  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.532172  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:02.532179  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:02.532244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:02.571915  148123 cri.go:89] found id: ""
	I1010 19:26:02.571943  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.571951  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:02.571957  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:02.572014  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:02.616330  148123 cri.go:89] found id: ""
	I1010 19:26:02.616365  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.616376  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:02.616385  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:02.616455  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:02.662245  148123 cri.go:89] found id: ""
	I1010 19:26:02.662275  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.662284  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:02.662291  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:02.662354  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:02.698692  148123 cri.go:89] found id: ""
	I1010 19:26:02.698720  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.698731  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:02.698739  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:02.698805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:02.736643  148123 cri.go:89] found id: ""
	I1010 19:26:02.736667  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.736677  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:02.736685  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:02.736742  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:02.780356  148123 cri.go:89] found id: ""
	I1010 19:26:02.780388  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.780399  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:02.780407  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:02.780475  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:02.816716  148123 cri.go:89] found id: ""
	I1010 19:26:02.816744  148123 logs.go:282] 0 containers: []
	W1010 19:26:02.816752  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:02.816761  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:02.816775  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:02.899002  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:02.899046  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:02.942060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:02.942093  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:02.997605  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:02.997661  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:03.011768  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:03.011804  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:03.096404  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:25:59.727075  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.227526  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:25:59.565903  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:02.066179  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.049656  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.050425  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:05.596971  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:05.612524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:05.612598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:05.649999  148123 cri.go:89] found id: ""
	I1010 19:26:05.650032  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.650044  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:05.650052  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:05.650119  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:05.684365  148123 cri.go:89] found id: ""
	I1010 19:26:05.684398  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.684419  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:05.684427  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:05.684494  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:05.721801  148123 cri.go:89] found id: ""
	I1010 19:26:05.721832  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.721840  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:05.721847  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:05.721908  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:05.762010  148123 cri.go:89] found id: ""
	I1010 19:26:05.762037  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.762046  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:05.762056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:05.762117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:05.803425  148123 cri.go:89] found id: ""
	I1010 19:26:05.803457  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.803468  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:05.803476  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:05.803551  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:05.838800  148123 cri.go:89] found id: ""
	I1010 19:26:05.838833  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.838845  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:05.838853  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:05.838921  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:05.875110  148123 cri.go:89] found id: ""
	I1010 19:26:05.875147  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.875158  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:05.875167  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:05.875245  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:05.908951  148123 cri.go:89] found id: ""
	I1010 19:26:05.908988  148123 logs.go:282] 0 containers: []
	W1010 19:26:05.909000  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:05.909012  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:05.909027  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:05.922263  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:05.922293  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:05.996441  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:05.996465  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:05.996484  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:06.078113  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:06.078160  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:06.122919  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:06.122956  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:04.726335  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.728178  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:04.066573  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:06.564991  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.566655  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.050522  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:10.550288  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:08.674464  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:08.688452  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:08.688570  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:08.726240  148123 cri.go:89] found id: ""
	I1010 19:26:08.726269  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.726279  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:08.726287  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:08.726370  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:08.764762  148123 cri.go:89] found id: ""
	I1010 19:26:08.764790  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.764799  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:08.764806  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:08.764887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:08.805522  148123 cri.go:89] found id: ""
	I1010 19:26:08.805552  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.805563  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:08.805572  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:08.805642  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:08.845694  148123 cri.go:89] found id: ""
	I1010 19:26:08.845729  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.845740  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:08.845747  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:08.845817  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:08.885024  148123 cri.go:89] found id: ""
	I1010 19:26:08.885054  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.885066  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:08.885074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:08.885138  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:08.927500  148123 cri.go:89] found id: ""
	I1010 19:26:08.927531  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.927540  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:08.927547  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:08.927616  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:08.963870  148123 cri.go:89] found id: ""
	I1010 19:26:08.963904  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.963916  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:08.963924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:08.963988  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:08.997984  148123 cri.go:89] found id: ""
	I1010 19:26:08.998017  148123 logs.go:282] 0 containers: []
	W1010 19:26:08.998027  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:08.998039  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:08.998056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:09.049307  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:09.049341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:09.063341  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:09.063432  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:09.145190  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:09.145214  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:09.145226  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:09.231409  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:09.231445  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:11.773475  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:11.788981  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:11.789055  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:11.830289  148123 cri.go:89] found id: ""
	I1010 19:26:11.830319  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.830327  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:11.830333  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:11.830399  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:11.867093  148123 cri.go:89] found id: ""
	I1010 19:26:11.867120  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.867131  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:11.867138  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:11.867212  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:11.902146  148123 cri.go:89] found id: ""
	I1010 19:26:11.902181  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.902192  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:11.902201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:11.902271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:11.938652  148123 cri.go:89] found id: ""
	I1010 19:26:11.938681  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.938691  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:11.938703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:11.938771  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:11.975903  148123 cri.go:89] found id: ""
	I1010 19:26:11.975937  148123 logs.go:282] 0 containers: []
	W1010 19:26:11.975947  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:11.975955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:11.976020  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:12.014083  148123 cri.go:89] found id: ""
	I1010 19:26:12.014111  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.014119  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:12.014126  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:12.014190  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:12.057832  148123 cri.go:89] found id: ""
	I1010 19:26:12.057858  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.057867  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:12.057874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:12.057925  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:12.103253  148123 cri.go:89] found id: ""
	I1010 19:26:12.103286  148123 logs.go:282] 0 containers: []
	W1010 19:26:12.103298  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:12.103311  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:12.103326  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:12.171062  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:12.171104  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:12.185463  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:12.185496  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:12.261568  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:12.261592  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:12.261607  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:12.345819  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:12.345860  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:09.226954  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.227205  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.227457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:11.066777  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:13.565854  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:12.551323  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:15.051745  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:14.886283  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:14.901117  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:14.901213  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:14.935235  148123 cri.go:89] found id: ""
	I1010 19:26:14.935264  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.935273  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:14.935280  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:14.935336  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:14.970617  148123 cri.go:89] found id: ""
	I1010 19:26:14.970642  148123 logs.go:282] 0 containers: []
	W1010 19:26:14.970652  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:14.970658  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:14.970722  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:15.011086  148123 cri.go:89] found id: ""
	I1010 19:26:15.011113  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.011122  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:15.011128  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:15.011188  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:15.053004  148123 cri.go:89] found id: ""
	I1010 19:26:15.053026  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.053033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:15.053039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:15.053099  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:15.089275  148123 cri.go:89] found id: ""
	I1010 19:26:15.089304  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.089312  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:15.089319  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:15.089383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:15.126620  148123 cri.go:89] found id: ""
	I1010 19:26:15.126645  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.126654  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:15.126660  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:15.126723  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:15.160920  148123 cri.go:89] found id: ""
	I1010 19:26:15.160956  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.160969  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:15.160979  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:15.161037  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:15.197430  148123 cri.go:89] found id: ""
	I1010 19:26:15.197462  148123 logs.go:282] 0 containers: []
	W1010 19:26:15.197474  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:15.197486  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:15.197507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:15.240905  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:15.240953  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:15.297663  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:15.297719  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:15.312279  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:15.312313  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:15.395296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:15.395320  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:15.395336  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:17.978030  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:17.992574  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:17.992651  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:18.028846  148123 cri.go:89] found id: ""
	I1010 19:26:18.028890  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.028901  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:18.028908  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:18.028965  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:18.073004  148123 cri.go:89] found id: ""
	I1010 19:26:18.073033  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.073041  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:18.073047  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:18.073118  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:18.112062  148123 cri.go:89] found id: ""
	I1010 19:26:18.112098  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.112111  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:18.112121  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:18.112209  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:18.151565  148123 cri.go:89] found id: ""
	I1010 19:26:18.151597  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.151608  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:18.151616  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:18.151675  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:18.197483  148123 cri.go:89] found id: ""
	I1010 19:26:18.197509  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.197518  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:18.197524  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:18.197591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:18.235151  148123 cri.go:89] found id: ""
	I1010 19:26:18.235181  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.235193  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:18.235201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:18.235279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:18.273807  148123 cri.go:89] found id: ""
	I1010 19:26:18.273841  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.273852  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:18.273858  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:18.273920  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:18.311644  148123 cri.go:89] found id: ""
	I1010 19:26:18.311677  148123 logs.go:282] 0 containers: []
	W1010 19:26:18.311688  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:18.311700  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:18.311716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:18.355599  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:18.355635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:18.407786  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:18.407830  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:18.422944  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:18.422976  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:18.496830  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:18.496873  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:18.496887  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:15.227600  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.726712  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:16.065701  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:18.066861  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:17.558257  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.050914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:21.108300  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:21.123177  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:21.123243  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:21.162544  148123 cri.go:89] found id: ""
	I1010 19:26:21.162575  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.162586  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:21.162595  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:21.162662  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:21.199799  148123 cri.go:89] found id: ""
	I1010 19:26:21.199828  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.199839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:21.199845  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:21.199901  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:21.246333  148123 cri.go:89] found id: ""
	I1010 19:26:21.246357  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.246366  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:21.246372  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:21.246433  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:21.282681  148123 cri.go:89] found id: ""
	I1010 19:26:21.282719  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.282732  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:21.282740  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:21.282810  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:21.319500  148123 cri.go:89] found id: ""
	I1010 19:26:21.319535  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.319546  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:21.319554  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:21.319623  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:21.355779  148123 cri.go:89] found id: ""
	I1010 19:26:21.355809  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.355820  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:21.355828  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:21.355902  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:21.394567  148123 cri.go:89] found id: ""
	I1010 19:26:21.394608  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.394617  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:21.394623  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:21.394684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:21.430009  148123 cri.go:89] found id: ""
	I1010 19:26:21.430046  148123 logs.go:282] 0 containers: []
	W1010 19:26:21.430058  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:21.430069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:21.430089  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:21.443267  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:21.443301  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:21.517046  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:21.517068  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:21.517083  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:21.594927  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:21.594978  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:21.634274  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:21.634311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:20.227157  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.727736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:20.566652  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:23.066459  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:22.550526  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.050647  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:24.189760  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:24.204355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:24.204420  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:24.248130  148123 cri.go:89] found id: ""
	I1010 19:26:24.248160  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.248171  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:24.248178  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:24.248244  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:24.293281  148123 cri.go:89] found id: ""
	I1010 19:26:24.293312  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.293324  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:24.293331  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:24.293400  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:24.350709  148123 cri.go:89] found id: ""
	I1010 19:26:24.350743  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.350755  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:24.350765  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:24.350838  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:24.394113  148123 cri.go:89] found id: ""
	I1010 19:26:24.394152  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.394170  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:24.394181  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:24.394256  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:24.445999  148123 cri.go:89] found id: ""
	I1010 19:26:24.446030  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.446042  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:24.446051  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:24.446120  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:24.483566  148123 cri.go:89] found id: ""
	I1010 19:26:24.483596  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.483605  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:24.483612  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:24.483665  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:24.520705  148123 cri.go:89] found id: ""
	I1010 19:26:24.520736  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.520748  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:24.520757  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:24.520825  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:24.557316  148123 cri.go:89] found id: ""
	I1010 19:26:24.557346  148123 logs.go:282] 0 containers: []
	W1010 19:26:24.557355  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:24.557364  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:24.557376  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:24.608065  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:24.608109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:24.623202  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:24.623250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:24.694665  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:24.694697  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:24.694716  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.783369  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:24.783414  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.327524  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:27.341132  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:27.341214  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:27.376589  148123 cri.go:89] found id: ""
	I1010 19:26:27.376618  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.376627  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:27.376633  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:27.376684  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:27.414452  148123 cri.go:89] found id: ""
	I1010 19:26:27.414491  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.414502  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:27.414510  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:27.414572  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:27.449839  148123 cri.go:89] found id: ""
	I1010 19:26:27.449867  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.449875  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:27.449882  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:27.449932  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:27.492445  148123 cri.go:89] found id: ""
	I1010 19:26:27.492472  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.492480  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:27.492487  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:27.492549  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:27.533032  148123 cri.go:89] found id: ""
	I1010 19:26:27.533085  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.533095  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:27.533122  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:27.533199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:27.571096  148123 cri.go:89] found id: ""
	I1010 19:26:27.571122  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.571130  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:27.571135  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:27.571204  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:27.608762  148123 cri.go:89] found id: ""
	I1010 19:26:27.608798  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.608809  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:27.608818  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:27.608896  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:27.648200  148123 cri.go:89] found id: ""
	I1010 19:26:27.648236  148123 logs.go:282] 0 containers: []
	W1010 19:26:27.648248  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:27.648260  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:27.648275  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:27.698124  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:27.698154  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:27.755009  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:27.755052  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:27.770048  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:27.770084  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:27.845475  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:27.845505  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:27.845519  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:24.729352  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:26.731831  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:25.566028  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.567052  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:27.555698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.049914  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.423117  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:30.436706  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:30.436768  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:30.473367  148123 cri.go:89] found id: ""
	I1010 19:26:30.473396  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.473408  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:30.473417  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:30.473470  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:30.510560  148123 cri.go:89] found id: ""
	I1010 19:26:30.510599  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.510610  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:30.510619  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:30.510687  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:30.549682  148123 cri.go:89] found id: ""
	I1010 19:26:30.549715  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.549726  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:30.549734  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:30.549798  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:30.586401  148123 cri.go:89] found id: ""
	I1010 19:26:30.586425  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.586434  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:30.586441  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:30.586492  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:30.622012  148123 cri.go:89] found id: ""
	I1010 19:26:30.622037  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.622045  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:30.622052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:30.622107  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:30.658336  148123 cri.go:89] found id: ""
	I1010 19:26:30.658364  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.658372  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:30.658378  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:30.658442  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:30.695495  148123 cri.go:89] found id: ""
	I1010 19:26:30.695523  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.695532  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:30.695538  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:30.695591  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:30.735093  148123 cri.go:89] found id: ""
	I1010 19:26:30.735125  148123 logs.go:282] 0 containers: []
	W1010 19:26:30.735136  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:30.735148  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:30.735163  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:30.816097  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:30.816140  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:30.853878  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:30.853915  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:30.906289  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:30.906341  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:30.923742  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:30.923769  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:30.993226  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.493799  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:33.508083  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:33.508166  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:33.545019  148123 cri.go:89] found id: ""
	I1010 19:26:33.545055  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.545072  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:33.545081  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:33.545150  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:29.226673  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:31.227117  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.727777  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:30.068231  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.566025  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:32.050118  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:34.051720  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:36.550138  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:33.584215  148123 cri.go:89] found id: ""
	I1010 19:26:33.584242  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.584252  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:33.584259  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:33.584315  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:33.620598  148123 cri.go:89] found id: ""
	I1010 19:26:33.620627  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.620636  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:33.620643  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:33.620691  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:33.655225  148123 cri.go:89] found id: ""
	I1010 19:26:33.655252  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.655260  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:33.655267  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:33.655322  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:33.693464  148123 cri.go:89] found id: ""
	I1010 19:26:33.693490  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.693498  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:33.693505  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:33.693554  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:33.734024  148123 cri.go:89] found id: ""
	I1010 19:26:33.734059  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.734071  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:33.734079  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:33.734141  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:33.769434  148123 cri.go:89] found id: ""
	I1010 19:26:33.769467  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.769476  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:33.769483  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:33.769533  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:33.809011  148123 cri.go:89] found id: ""
	I1010 19:26:33.809040  148123 logs.go:282] 0 containers: []
	W1010 19:26:33.809049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:33.809059  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:33.809071  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:33.864186  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:33.864230  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:33.878681  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:33.878711  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:33.958225  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:33.958250  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:33.958265  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:34.034730  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:34.034774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.577123  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:36.591494  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:36.591560  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:36.625170  148123 cri.go:89] found id: ""
	I1010 19:26:36.625210  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.625222  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:36.625230  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:36.625295  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:36.660790  148123 cri.go:89] found id: ""
	I1010 19:26:36.660821  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.660831  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:36.660838  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:36.660911  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:36.695742  148123 cri.go:89] found id: ""
	I1010 19:26:36.695774  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.695786  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:36.695793  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:36.695854  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:36.729523  148123 cri.go:89] found id: ""
	I1010 19:26:36.729552  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.729561  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:36.729569  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:36.729630  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:36.765515  148123 cri.go:89] found id: ""
	I1010 19:26:36.765549  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.765561  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:36.765571  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:36.765635  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:36.799110  148123 cri.go:89] found id: ""
	I1010 19:26:36.799144  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.799155  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:36.799164  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:36.799224  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:36.832806  148123 cri.go:89] found id: ""
	I1010 19:26:36.832834  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.832845  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:36.832865  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:36.832933  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:36.869093  148123 cri.go:89] found id: ""
	I1010 19:26:36.869126  148123 logs.go:282] 0 containers: []
	W1010 19:26:36.869137  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:36.869149  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:36.869165  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:36.922229  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:36.922276  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:36.936973  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:36.937019  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:37.016400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:37.016425  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:37.016448  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:37.106308  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:37.106354  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:36.227451  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.726229  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:35.067396  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:37.565711  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:38.550438  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:41.050698  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:39.652101  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:39.665546  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:39.665639  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:39.699646  148123 cri.go:89] found id: ""
	I1010 19:26:39.699675  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.699686  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:39.699693  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:39.699745  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:39.735568  148123 cri.go:89] found id: ""
	I1010 19:26:39.735592  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.735600  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:39.735606  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:39.735656  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:39.770021  148123 cri.go:89] found id: ""
	I1010 19:26:39.770049  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.770060  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:39.770067  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:39.770139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:39.804867  148123 cri.go:89] found id: ""
	I1010 19:26:39.804894  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.804903  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:39.804910  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:39.804972  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:39.847074  148123 cri.go:89] found id: ""
	I1010 19:26:39.847104  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.847115  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:39.847124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:39.847200  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:39.890334  148123 cri.go:89] found id: ""
	I1010 19:26:39.890368  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.890379  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:39.890387  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:39.890456  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:39.926316  148123 cri.go:89] found id: ""
	I1010 19:26:39.926346  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.926355  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:39.926362  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:39.926416  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:39.966179  148123 cri.go:89] found id: ""
	I1010 19:26:39.966215  148123 logs.go:282] 0 containers: []
	W1010 19:26:39.966227  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:39.966239  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:39.966255  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:40.047457  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:40.047505  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.086611  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:40.086656  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:40.139123  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:40.139167  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:40.152966  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:40.152995  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:40.231009  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:42.731851  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:42.748201  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:42.748287  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:42.787567  148123 cri.go:89] found id: ""
	I1010 19:26:42.787599  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.787607  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:42.787614  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:42.787697  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:42.831051  148123 cri.go:89] found id: ""
	I1010 19:26:42.831089  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.831100  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:42.831109  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:42.831180  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:42.871352  148123 cri.go:89] found id: ""
	I1010 19:26:42.871390  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.871402  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:42.871410  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:42.871484  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:42.910283  148123 cri.go:89] found id: ""
	I1010 19:26:42.910316  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.910327  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:42.910335  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:42.910403  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:42.946971  148123 cri.go:89] found id: ""
	I1010 19:26:42.947003  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.947012  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:42.947019  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:42.947087  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:42.985484  148123 cri.go:89] found id: ""
	I1010 19:26:42.985517  148123 logs.go:282] 0 containers: []
	W1010 19:26:42.985528  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:42.985537  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:42.985603  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:43.024500  148123 cri.go:89] found id: ""
	I1010 19:26:43.024535  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.024544  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:43.024552  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:43.024608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:43.064008  148123 cri.go:89] found id: ""
	I1010 19:26:43.064039  148123 logs.go:282] 0 containers: []
	W1010 19:26:43.064049  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:43.064060  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:43.064075  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:43.119367  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:43.119415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:43.133396  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:43.133428  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:43.207447  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:43.207470  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:43.207491  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:43.290416  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:43.290456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:40.727919  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.227782  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:40.066461  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:42.565505  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:43.051835  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.052308  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:45.834115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:45.849172  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:45.849238  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:45.886020  148123 cri.go:89] found id: ""
	I1010 19:26:45.886049  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.886060  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:45.886067  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:45.886152  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:45.923188  148123 cri.go:89] found id: ""
	I1010 19:26:45.923218  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.923245  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:45.923253  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:45.923316  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:45.960297  148123 cri.go:89] found id: ""
	I1010 19:26:45.960337  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.960351  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:45.960361  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:45.960432  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:45.995140  148123 cri.go:89] found id: ""
	I1010 19:26:45.995173  148123 logs.go:282] 0 containers: []
	W1010 19:26:45.995184  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:45.995191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:45.995265  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:46.040390  148123 cri.go:89] found id: ""
	I1010 19:26:46.040418  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.040426  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:46.040433  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:46.040500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:46.079999  148123 cri.go:89] found id: ""
	I1010 19:26:46.080029  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.080042  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:46.080052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:46.080105  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:46.117331  148123 cri.go:89] found id: ""
	I1010 19:26:46.117357  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.117364  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:46.117370  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:46.117441  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:46.152248  148123 cri.go:89] found id: ""
	I1010 19:26:46.152279  148123 logs.go:282] 0 containers: []
	W1010 19:26:46.152290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:46.152303  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:46.152319  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:46.192503  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:46.192533  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:46.246419  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:46.246456  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:46.259864  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:46.259895  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:46.338518  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:46.338549  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:46.338567  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:45.726776  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.228318  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:44.567056  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.065636  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:47.551013  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:50.053824  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:48.924216  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:48.938741  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:48.938805  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:48.973310  148123 cri.go:89] found id: ""
	I1010 19:26:48.973339  148123 logs.go:282] 0 containers: []
	W1010 19:26:48.973348  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:48.973356  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:48.973411  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:49.010467  148123 cri.go:89] found id: ""
	I1010 19:26:49.010492  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.010500  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:49.010506  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:49.010585  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:49.051621  148123 cri.go:89] found id: ""
	I1010 19:26:49.051646  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.051653  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:49.051664  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:49.051727  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:49.091090  148123 cri.go:89] found id: ""
	I1010 19:26:49.091121  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.091132  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:49.091140  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:49.091202  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:49.127669  148123 cri.go:89] found id: ""
	I1010 19:26:49.127712  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.127724  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:49.127732  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:49.127804  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:49.164446  148123 cri.go:89] found id: ""
	I1010 19:26:49.164476  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.164485  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:49.164491  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:49.164553  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:49.201222  148123 cri.go:89] found id: ""
	I1010 19:26:49.201263  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.201275  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:49.201284  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:49.201345  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:49.238258  148123 cri.go:89] found id: ""
	I1010 19:26:49.238283  148123 logs.go:282] 0 containers: []
	W1010 19:26:49.238290  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:49.238299  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:49.238311  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:49.313576  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:49.313619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:49.351066  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:49.351096  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:49.402772  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:49.402823  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:49.417713  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:49.417752  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:49.488834  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:51.989031  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:52.003056  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:52.003140  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:52.039682  148123 cri.go:89] found id: ""
	I1010 19:26:52.039709  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.039718  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:52.039725  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:52.039788  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:52.081402  148123 cri.go:89] found id: ""
	I1010 19:26:52.081433  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.081443  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:52.081449  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:52.081502  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:52.118270  148123 cri.go:89] found id: ""
	I1010 19:26:52.118304  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.118315  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:52.118325  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:52.118392  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:52.154692  148123 cri.go:89] found id: ""
	I1010 19:26:52.154724  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.154735  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:52.154743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:52.154807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:52.190056  148123 cri.go:89] found id: ""
	I1010 19:26:52.190085  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.190094  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:52.190101  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:52.190161  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:52.225468  148123 cri.go:89] found id: ""
	I1010 19:26:52.225501  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.225511  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:52.225521  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:52.225589  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:52.260682  148123 cri.go:89] found id: ""
	I1010 19:26:52.260710  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.260718  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:52.260724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:52.260774  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:52.297613  148123 cri.go:89] found id: ""
	I1010 19:26:52.297642  148123 logs.go:282] 0 containers: []
	W1010 19:26:52.297659  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:52.297672  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:52.297689  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:52.352224  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:52.352267  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:52.367003  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:52.367033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:52.443124  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:52.443157  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:52.443173  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:52.522391  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:52.522433  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:50.726363  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.727069  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:49.069109  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:51.566132  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:53.567867  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:52.554195  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.050995  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:55.066877  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:55.082191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:55.082258  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:55.120729  148123 cri.go:89] found id: ""
	I1010 19:26:55.120763  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.120775  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:55.120784  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:55.120863  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:55.154795  148123 cri.go:89] found id: ""
	I1010 19:26:55.154827  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.154839  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:55.154848  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:55.154904  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:55.194846  148123 cri.go:89] found id: ""
	I1010 19:26:55.194876  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.194889  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:55.194897  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:55.194958  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:55.230173  148123 cri.go:89] found id: ""
	I1010 19:26:55.230201  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.230212  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:55.230220  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:55.230290  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:55.264999  148123 cri.go:89] found id: ""
	I1010 19:26:55.265025  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.265032  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:55.265039  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:55.265096  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:55.303666  148123 cri.go:89] found id: ""
	I1010 19:26:55.303695  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.303703  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:55.303724  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:55.303797  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:55.342061  148123 cri.go:89] found id: ""
	I1010 19:26:55.342087  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.342098  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:55.342106  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:55.342171  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:55.378305  148123 cri.go:89] found id: ""
	I1010 19:26:55.378336  148123 logs.go:282] 0 containers: []
	W1010 19:26:55.378345  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:55.378354  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:55.378365  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.416815  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:55.416865  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:55.467371  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:55.467415  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:55.481704  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:55.481744  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:55.559219  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:26:55.559242  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:55.559254  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.141472  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:26:58.154326  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:26:58.154394  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:26:58.193053  148123 cri.go:89] found id: ""
	I1010 19:26:58.193080  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.193088  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:26:58.193094  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:26:58.193142  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:26:58.232514  148123 cri.go:89] found id: ""
	I1010 19:26:58.232538  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.232545  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:26:58.232551  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:26:58.232599  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:26:58.266724  148123 cri.go:89] found id: ""
	I1010 19:26:58.266756  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.266765  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:26:58.266771  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:26:58.266824  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:26:58.301986  148123 cri.go:89] found id: ""
	I1010 19:26:58.302012  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.302019  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:26:58.302031  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:26:58.302080  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:26:58.340394  148123 cri.go:89] found id: ""
	I1010 19:26:58.340431  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.340440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:26:58.340448  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:26:58.340500  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:26:58.407035  148123 cri.go:89] found id: ""
	I1010 19:26:58.407069  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.407081  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:26:58.407088  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:26:58.407151  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:26:58.446650  148123 cri.go:89] found id: ""
	I1010 19:26:58.446682  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.446694  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:26:58.446703  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:26:58.446763  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:26:58.480719  148123 cri.go:89] found id: ""
	I1010 19:26:58.480750  148123 logs.go:282] 0 containers: []
	W1010 19:26:58.480759  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:26:58.480768  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:26:58.480781  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:26:58.561568  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:26:58.561602  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:26:55.227199  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.726841  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:56.065787  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.566732  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:57.550718  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:59.550793  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:26:58.609237  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:26:58.609264  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:26:58.659705  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:26:58.659748  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:26:58.674646  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:26:58.674680  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:26:58.750922  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.252098  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:01.265420  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:01.265497  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:01.300133  148123 cri.go:89] found id: ""
	I1010 19:27:01.300168  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.300180  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:01.300188  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:01.300272  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:01.337451  148123 cri.go:89] found id: ""
	I1010 19:27:01.337477  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.337485  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:01.337492  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:01.337539  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:01.371987  148123 cri.go:89] found id: ""
	I1010 19:27:01.372019  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.372028  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:01.372034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:01.372098  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:01.408996  148123 cri.go:89] found id: ""
	I1010 19:27:01.409022  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.409033  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:01.409042  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:01.409109  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:01.443558  148123 cri.go:89] found id: ""
	I1010 19:27:01.443587  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.443595  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:01.443602  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:01.443663  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:01.478517  148123 cri.go:89] found id: ""
	I1010 19:27:01.478546  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.478555  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:01.478561  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:01.478611  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:01.513528  148123 cri.go:89] found id: ""
	I1010 19:27:01.513556  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.513568  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:01.513576  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:01.513641  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:01.558861  148123 cri.go:89] found id: ""
	I1010 19:27:01.558897  148123 logs.go:282] 0 containers: []
	W1010 19:27:01.558909  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:01.558921  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:01.558937  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:01.599719  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:01.599747  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:01.650736  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:01.650774  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:01.664643  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:01.664669  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:01.736533  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:01.736557  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:01.736570  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:00.225540  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.226962  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:00.567193  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:03.066587  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:02.050439  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.050984  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:06.550977  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:04.317302  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:04.330450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:04.330519  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:04.368893  148123 cri.go:89] found id: ""
	I1010 19:27:04.368923  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.368932  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:04.368939  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:04.368993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:04.406598  148123 cri.go:89] found id: ""
	I1010 19:27:04.406625  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.406634  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:04.406640  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:04.406692  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:04.441485  148123 cri.go:89] found id: ""
	I1010 19:27:04.441515  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.441525  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:04.441532  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:04.441581  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:04.476663  148123 cri.go:89] found id: ""
	I1010 19:27:04.476690  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.476698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:04.476704  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:04.476765  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:04.512124  148123 cri.go:89] found id: ""
	I1010 19:27:04.512171  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.512186  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:04.512195  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:04.512260  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:04.547902  148123 cri.go:89] found id: ""
	I1010 19:27:04.547929  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.547940  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:04.547949  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:04.548008  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:04.585257  148123 cri.go:89] found id: ""
	I1010 19:27:04.585287  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.585297  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:04.585303  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:04.585352  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:04.621019  148123 cri.go:89] found id: ""
	I1010 19:27:04.621048  148123 logs.go:282] 0 containers: []
	W1010 19:27:04.621057  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:04.621068  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:04.621080  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:04.662770  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:04.662801  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:04.714551  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:04.714592  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:04.730922  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:04.730951  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:04.841139  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:04.841163  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:04.841178  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.428528  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:07.442423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:07.442498  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:07.476722  148123 cri.go:89] found id: ""
	I1010 19:27:07.476753  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.476764  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:07.476772  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:07.476835  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:07.512211  148123 cri.go:89] found id: ""
	I1010 19:27:07.512245  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.512256  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:07.512263  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:07.512325  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:07.546546  148123 cri.go:89] found id: ""
	I1010 19:27:07.546588  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.546601  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:07.546619  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:07.546688  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:07.585743  148123 cri.go:89] found id: ""
	I1010 19:27:07.585768  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.585777  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:07.585783  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:07.585848  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:07.626827  148123 cri.go:89] found id: ""
	I1010 19:27:07.626855  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.626865  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:07.626874  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:07.626960  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:07.661910  148123 cri.go:89] found id: ""
	I1010 19:27:07.661940  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.661948  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:07.661955  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:07.662072  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:07.698255  148123 cri.go:89] found id: ""
	I1010 19:27:07.698288  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.698301  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:07.698309  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:07.698374  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:07.734389  148123 cri.go:89] found id: ""
	I1010 19:27:07.734417  148123 logs.go:282] 0 containers: []
	W1010 19:27:07.734428  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:07.734440  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:07.734454  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:07.814089  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:07.814130  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:07.854799  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:07.854831  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:07.906595  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:07.906635  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:07.921928  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:07.921966  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:07.989839  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
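	(The cycle above repeats throughout this retry loop: the harness probes each control-plane component with crictl, then gathers kubelet, dmesg, CRI-O, and describe-nodes output. A minimal consolidated sketch of those same probes, assuming it is run inside the minikube guest, e.g. via `minikube ssh`; the binary path and kubectl version v1.20.0 are taken from the log lines above.)

	# Probe for each expected control-plane container, as the harness does.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  echo "${name}: ${ids:-<no containers found>}"
	done
	# Log sources the harness gathers on every iteration.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig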
	I1010 19:27:04.727522  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.226694  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:05.565868  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:07.567139  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:09.050772  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:11.051291  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.490115  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:10.504216  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:10.504292  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:10.554789  148123 cri.go:89] found id: ""
	I1010 19:27:10.554815  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.554825  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:10.554831  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:10.554889  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:10.618970  148123 cri.go:89] found id: ""
	I1010 19:27:10.618997  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.619005  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:10.619012  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:10.619061  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:10.666900  148123 cri.go:89] found id: ""
	I1010 19:27:10.666933  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.666946  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:10.666953  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:10.667023  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:10.704364  148123 cri.go:89] found id: ""
	I1010 19:27:10.704405  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.704430  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:10.704450  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:10.704525  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:10.742341  148123 cri.go:89] found id: ""
	I1010 19:27:10.742380  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.742389  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:10.742396  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:10.742461  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:10.783056  148123 cri.go:89] found id: ""
	I1010 19:27:10.783088  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.783098  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:10.783105  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:10.783174  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:10.823198  148123 cri.go:89] found id: ""
	I1010 19:27:10.823224  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.823233  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:10.823243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:10.823302  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:10.858154  148123 cri.go:89] found id: ""
	I1010 19:27:10.858195  148123 logs.go:282] 0 containers: []
	W1010 19:27:10.858204  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:10.858213  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:10.858229  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:10.935491  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:10.935518  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:10.935532  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:11.015653  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:11.015694  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:11.058765  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:11.058793  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:11.116748  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:11.116799  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:09.727270  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.225797  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:10.065372  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:12.065695  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.550669  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.051044  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:13.631579  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:13.644885  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:13.644953  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:13.685242  148123 cri.go:89] found id: ""
	I1010 19:27:13.685273  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.685285  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:13.685294  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:13.685364  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:13.720831  148123 cri.go:89] found id: ""
	I1010 19:27:13.720866  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.720877  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:13.720885  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:13.720947  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:13.766374  148123 cri.go:89] found id: ""
	I1010 19:27:13.766404  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.766413  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:13.766419  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:13.766468  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:13.804550  148123 cri.go:89] found id: ""
	I1010 19:27:13.804585  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.804597  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:13.804606  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:13.804674  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:13.842406  148123 cri.go:89] found id: ""
	I1010 19:27:13.842432  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.842440  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:13.842460  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:13.842678  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:13.882365  148123 cri.go:89] found id: ""
	I1010 19:27:13.882402  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.882415  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:13.882423  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:13.882505  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:13.926016  148123 cri.go:89] found id: ""
	I1010 19:27:13.926052  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.926065  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:13.926075  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:13.926177  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:13.960485  148123 cri.go:89] found id: ""
	I1010 19:27:13.960523  148123 logs.go:282] 0 containers: []
	W1010 19:27:13.960533  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:13.960542  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:13.960558  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:14.013013  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:14.013055  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:14.026906  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:14.026935  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:14.104469  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:14.104494  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:14.104507  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:14.185917  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:14.185959  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:16.732088  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:16.746646  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:16.746710  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:16.784715  148123 cri.go:89] found id: ""
	I1010 19:27:16.784739  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.784749  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:16.784755  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:16.784807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:16.824181  148123 cri.go:89] found id: ""
	I1010 19:27:16.824210  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.824220  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:16.824228  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:16.824289  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:16.860901  148123 cri.go:89] found id: ""
	I1010 19:27:16.860932  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.860941  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:16.860947  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:16.860997  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:16.895127  148123 cri.go:89] found id: ""
	I1010 19:27:16.895159  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.895175  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:16.895183  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:16.895266  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:16.931989  148123 cri.go:89] found id: ""
	I1010 19:27:16.932020  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.932028  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:16.932035  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:16.932086  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:16.967254  148123 cri.go:89] found id: ""
	I1010 19:27:16.967283  148123 logs.go:282] 0 containers: []
	W1010 19:27:16.967292  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:16.967299  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:16.967347  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:17.003010  148123 cri.go:89] found id: ""
	I1010 19:27:17.003035  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.003043  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:17.003050  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:17.003097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:17.040053  148123 cri.go:89] found id: ""
	I1010 19:27:17.040093  148123 logs.go:282] 0 containers: []
	W1010 19:27:17.040102  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:17.040118  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:17.040131  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:17.093176  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:17.093217  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:17.108086  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:17.108116  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:17.186423  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:17.186452  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:17.186467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:17.271884  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:17.271926  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:14.227197  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.739354  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:14.066233  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:16.565852  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.566337  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:18.051613  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:20.549888  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:19.817954  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:19.831924  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:19.831993  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:19.873415  148123 cri.go:89] found id: ""
	I1010 19:27:19.873441  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.873459  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:19.873466  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:19.873527  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:19.912174  148123 cri.go:89] found id: ""
	I1010 19:27:19.912207  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.912219  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:19.912227  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:19.912298  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:19.948422  148123 cri.go:89] found id: ""
	I1010 19:27:19.948456  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.948466  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:19.948472  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:19.948524  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:19.983917  148123 cri.go:89] found id: ""
	I1010 19:27:19.983949  148123 logs.go:282] 0 containers: []
	W1010 19:27:19.983962  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:19.983970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:19.984024  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:20.019160  148123 cri.go:89] found id: ""
	I1010 19:27:20.019187  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.019198  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:20.019207  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:20.019271  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:20.056073  148123 cri.go:89] found id: ""
	I1010 19:27:20.056104  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.056116  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:20.056124  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:20.056187  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:20.095877  148123 cri.go:89] found id: ""
	I1010 19:27:20.095916  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.095928  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:20.095935  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:20.096007  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:20.132360  148123 cri.go:89] found id: ""
	I1010 19:27:20.132385  148123 logs.go:282] 0 containers: []
	W1010 19:27:20.132394  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:20.132402  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:20.132413  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:20.190573  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:20.190619  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:20.205785  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:20.205819  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:20.278882  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:20.278911  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:20.278924  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:20.363982  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:20.364024  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:22.906059  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:22.919839  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:22.919912  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:22.958203  148123 cri.go:89] found id: ""
	I1010 19:27:22.958242  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.958252  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:22.958258  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:22.958312  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:22.995834  148123 cri.go:89] found id: ""
	I1010 19:27:22.995866  148123 logs.go:282] 0 containers: []
	W1010 19:27:22.995874  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:22.995880  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:22.995945  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:23.035913  148123 cri.go:89] found id: ""
	I1010 19:27:23.035950  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.035962  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:23.035970  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:23.036039  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:23.073995  148123 cri.go:89] found id: ""
	I1010 19:27:23.074036  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.074049  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:23.074057  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:23.074117  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:23.111190  148123 cri.go:89] found id: ""
	I1010 19:27:23.111222  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.111234  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:23.111243  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:23.111305  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:23.147771  148123 cri.go:89] found id: ""
	I1010 19:27:23.147797  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.147806  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:23.147814  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:23.147878  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:23.185485  148123 cri.go:89] found id: ""
	I1010 19:27:23.185517  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.185527  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:23.185535  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:23.185598  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:23.222030  148123 cri.go:89] found id: ""
	I1010 19:27:23.222060  148123 logs.go:282] 0 containers: []
	W1010 19:27:23.222070  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:23.222081  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:23.222097  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:23.301826  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:23.301849  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:23.301861  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:23.378688  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:23.378730  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:23.426683  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:23.426717  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:23.478425  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:23.478467  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:19.226994  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.727366  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:21.067094  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:23.567075  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:22.550076  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:24.551681  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:25.993768  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:26.008311  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:26.008383  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:26.046315  148123 cri.go:89] found id: ""
	I1010 19:27:26.046343  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.046355  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:26.046364  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:26.046418  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:26.081823  148123 cri.go:89] found id: ""
	I1010 19:27:26.081847  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.081855  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:26.081861  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:26.081913  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:26.117139  148123 cri.go:89] found id: ""
	I1010 19:27:26.117167  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.117183  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:26.117191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:26.117242  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:26.154477  148123 cri.go:89] found id: ""
	I1010 19:27:26.154501  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.154510  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:26.154517  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:26.154568  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:26.187497  148123 cri.go:89] found id: ""
	I1010 19:27:26.187527  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.187535  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:26.187541  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:26.187604  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:26.223110  148123 cri.go:89] found id: ""
	I1010 19:27:26.223140  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.223151  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:26.223160  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:26.223226  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:26.264975  148123 cri.go:89] found id: ""
	I1010 19:27:26.265003  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.265011  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:26.265017  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:26.265067  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:26.307467  148123 cri.go:89] found id: ""
	I1010 19:27:26.307494  148123 logs.go:282] 0 containers: []
	W1010 19:27:26.307503  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:26.307512  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:26.307524  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:26.346313  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:26.346342  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:26.400069  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:26.400109  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:26.415467  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:26.415500  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:26.486986  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:26.487016  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:26.487033  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:24.226736  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.228720  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.726470  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:26.067100  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:28.565675  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:27.051110  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.051207  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.553085  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:29.064522  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:29.077631  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:29.077696  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:29.116127  148123 cri.go:89] found id: ""
	I1010 19:27:29.116154  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.116164  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:29.116172  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:29.116241  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:29.152863  148123 cri.go:89] found id: ""
	I1010 19:27:29.152893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.152901  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:29.152907  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:29.152963  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:29.189951  148123 cri.go:89] found id: ""
	I1010 19:27:29.189983  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.189992  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:29.189999  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:29.190060  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:29.226611  148123 cri.go:89] found id: ""
	I1010 19:27:29.226646  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.226657  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:29.226665  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:29.226729  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:29.271884  148123 cri.go:89] found id: ""
	I1010 19:27:29.271921  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.271934  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:29.271944  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:29.272016  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:29.306128  148123 cri.go:89] found id: ""
	I1010 19:27:29.306168  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.306181  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:29.306191  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:29.306255  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:29.341869  148123 cri.go:89] found id: ""
	I1010 19:27:29.341893  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.341901  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:29.341908  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:29.341962  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:29.376213  148123 cri.go:89] found id: ""
	I1010 19:27:29.376240  148123 logs.go:282] 0 containers: []
	W1010 19:27:29.376249  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:29.376259  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:29.376273  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:29.428827  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:29.428878  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:29.443719  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:29.443754  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:29.516166  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:29.516193  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:29.516208  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:29.596643  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:29.596681  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:32.135286  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:32.148791  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:32.148893  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:32.185511  148123 cri.go:89] found id: ""
	I1010 19:27:32.185542  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.185553  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:32.185560  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:32.185626  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:32.222908  148123 cri.go:89] found id: ""
	I1010 19:27:32.222934  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.222942  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:32.222948  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:32.222999  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:32.260998  148123 cri.go:89] found id: ""
	I1010 19:27:32.261033  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.261045  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:32.261063  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:32.261128  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:32.298311  148123 cri.go:89] found id: ""
	I1010 19:27:32.298339  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.298348  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:32.298355  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:32.298409  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:32.334134  148123 cri.go:89] found id: ""
	I1010 19:27:32.334220  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.334236  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:32.334249  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:32.334319  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:32.369690  148123 cri.go:89] found id: ""
	I1010 19:27:32.369723  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.369735  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:32.369743  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:32.369807  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:32.407164  148123 cri.go:89] found id: ""
	I1010 19:27:32.407210  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.407218  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:32.407224  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:32.407279  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:32.446468  148123 cri.go:89] found id: ""
	I1010 19:27:32.446494  148123 logs.go:282] 0 containers: []
	W1010 19:27:32.446505  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:32.446519  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:32.446535  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:32.497953  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:32.497997  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:32.513590  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:32.513620  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:32.594037  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:32.594072  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:32.594090  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:32.673546  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:32.673587  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:30.727725  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:32.727813  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:31.066731  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:33.067815  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:34.050574  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:36.550119  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.226084  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:35.241152  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:35.241222  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:35.283208  148123 cri.go:89] found id: ""
	I1010 19:27:35.283245  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.283269  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:35.283286  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:35.283346  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:35.320406  148123 cri.go:89] found id: ""
	I1010 19:27:35.320444  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.320457  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:35.320466  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:35.320523  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:35.360984  148123 cri.go:89] found id: ""
	I1010 19:27:35.361015  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.361027  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:35.361034  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:35.361097  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:35.400191  148123 cri.go:89] found id: ""
	I1010 19:27:35.400219  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.400230  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:35.400244  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:35.400314  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:35.438092  148123 cri.go:89] found id: ""
	I1010 19:27:35.438126  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.438139  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:35.438147  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:35.438215  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:35.472656  148123 cri.go:89] found id: ""
	I1010 19:27:35.472681  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.472688  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:35.472694  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:35.472743  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.505895  148123 cri.go:89] found id: ""
	I1010 19:27:35.505931  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.505942  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:35.505950  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:35.506015  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:35.540406  148123 cri.go:89] found id: ""
	I1010 19:27:35.540441  148123 logs.go:282] 0 containers: []
	W1010 19:27:35.540451  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:35.540462  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:35.540478  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:35.555874  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:35.555903  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:35.628400  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:35.628427  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:35.628453  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:35.708262  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:35.708305  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:35.752796  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:35.752824  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.306212  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:38.319473  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:38.319543  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:38.357493  148123 cri.go:89] found id: ""
	I1010 19:27:38.357520  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.357529  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:38.357536  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:38.357588  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:38.395908  148123 cri.go:89] found id: ""
	I1010 19:27:38.395935  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.395943  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:38.395949  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:38.396010  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:38.434495  148123 cri.go:89] found id: ""
	I1010 19:27:38.434529  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.434539  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:38.434545  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:38.434608  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:38.474647  148123 cri.go:89] found id: ""
	I1010 19:27:38.474686  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.474698  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:38.474710  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:38.474784  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:38.511105  148123 cri.go:89] found id: ""
	I1010 19:27:38.511133  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.511141  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:38.511148  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:38.511199  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:38.551011  148123 cri.go:89] found id: ""
	I1010 19:27:38.551055  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.551066  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:38.551074  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:38.551139  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:35.227301  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:37.726528  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:35.567838  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.066658  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:38.552499  147758 pod_ready.go:103] pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.544561  147758 pod_ready.go:82] duration metric: took 4m0.00091784s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" ...
	E1010 19:27:40.544600  147758 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-kw529" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:27:40.544623  147758 pod_ready.go:39] duration metric: took 4m15.623470592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:27:40.544664  147758 kubeadm.go:597] duration metric: took 4m22.92080204s to restartPrimaryControlPlane
	W1010 19:27:40.544737  147758 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:40.544829  147758 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
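(The interleaved pod_ready lines above come from minikube polling the PodReady condition on system pods; when metrics-server never reports Ready within the 4m0s budget, the restart path gives up and falls through to the kubeadm reset just logged. Below is a minimal, hedged sketch of that kind of readiness probe using client-go; the kubeconfig path, pod name and poll interval are illustrative, not minikube's actual implementation.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; minikube uses its own per-profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget seen in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-kw529", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}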
	I1010 19:27:38.590579  148123 cri.go:89] found id: ""
	I1010 19:27:38.590606  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.590615  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:38.590621  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:38.590686  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:38.627775  148123 cri.go:89] found id: ""
	I1010 19:27:38.627812  148123 logs.go:282] 0 containers: []
	W1010 19:27:38.627826  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:38.627838  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:38.627854  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:38.683195  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:38.683239  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:38.697220  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:38.697250  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:38.771619  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:38.771651  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:38.771668  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:38.851324  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:38.851362  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.408165  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:41.422958  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:27:41.423032  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:27:41.466774  148123 cri.go:89] found id: ""
	I1010 19:27:41.466806  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.466816  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:27:41.466823  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:27:41.466874  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:27:41.506429  148123 cri.go:89] found id: ""
	I1010 19:27:41.506470  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.506482  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:27:41.506489  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:27:41.506555  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:27:41.548929  148123 cri.go:89] found id: ""
	I1010 19:27:41.548965  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.548976  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:27:41.548983  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:27:41.549044  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:27:41.592243  148123 cri.go:89] found id: ""
	I1010 19:27:41.592274  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.592283  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:27:41.592290  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:27:41.592342  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:27:41.628516  148123 cri.go:89] found id: ""
	I1010 19:27:41.628558  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.628570  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:27:41.628578  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:27:41.628650  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:27:41.665011  148123 cri.go:89] found id: ""
	I1010 19:27:41.665041  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.665053  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:27:41.665060  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:27:41.665112  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:27:41.702650  148123 cri.go:89] found id: ""
	I1010 19:27:41.702681  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.702692  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:27:41.702700  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:27:41.702764  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:27:41.738971  148123 cri.go:89] found id: ""
	I1010 19:27:41.739002  148123 logs.go:282] 0 containers: []
	W1010 19:27:41.739021  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:27:41.739031  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:27:41.739062  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:27:41.813296  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:27:41.813321  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:27:41.813335  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:27:41.895138  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:27:41.895185  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:27:41.941975  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:27:41.942012  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:27:41.994838  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:27:41.994888  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:27:39.727140  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:41.728263  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:40.566241  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:43.065219  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:44.511153  148123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:27:44.524871  148123 kubeadm.go:597] duration metric: took 4m4.194337013s to restartPrimaryControlPlane
	W1010 19:27:44.524953  148123 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:27:44.524988  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:27:44.994063  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:27:45.010926  148123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:27:45.021757  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:27:45.032246  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:27:45.032270  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:27:45.032326  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:27:45.042799  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:27:45.042866  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:27:45.053017  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:27:45.063373  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:27:45.063445  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:27:45.074689  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.085187  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:27:45.085241  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:27:45.096275  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:27:45.106686  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:27:45.106750  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
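(The four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443 and is removed otherwise so kubeadm can regenerate it. Below is a hedged local sketch of the same check; the helper name and error handling are illustrative, not minikube's actual code.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes kubeconfig files that do not reference the
// expected control-plane endpoint, mirroring the grep/rm pairs in the log.
func pruneStaleKubeconfigs(endpoint string, files []string) error {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: delete so kubeadm recreates it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				return rmErr
			}
			fmt.Printf("removed stale config %s\n", f)
		}
	}
	return nil
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", files); err != nil {
		panic(err)
	}
}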
	I1010 19:27:45.117018  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:27:45.358920  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:27:44.226853  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:46.227586  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:48.727469  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:45.066410  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:47.569864  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:51.230704  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:53.727351  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:50.065845  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:52.066267  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:55.727457  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:58.226861  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:54.564611  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:56.566702  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:00.728542  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.225779  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:27:59.065614  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:01.068088  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:03.566502  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.739904  147758 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.195045639s)
	I1010 19:28:06.739984  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:06.756046  147758 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:06.768580  147758 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:06.780663  147758 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:06.780732  147758 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:06.780807  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:28:06.792092  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:06.792179  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:06.804515  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:28:06.814969  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:06.815040  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:06.826056  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.836050  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:06.836108  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:06.846125  147758 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:28:06.855505  147758 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:06.855559  147758 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:06.865367  147758 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:06.916227  147758 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:06.916375  147758 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:07.036539  147758 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:07.036652  147758 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:07.036762  147758 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:07.044897  147758 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:07.046978  147758 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:07.047117  147758 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:07.047229  147758 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:07.047384  147758 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:07.047467  147758 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:07.047584  147758 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:07.047675  147758 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:07.047794  147758 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:07.047902  147758 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:07.048005  147758 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:07.048093  147758 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:07.048142  147758 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:07.048210  147758 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:07.127836  147758 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:07.434492  147758 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:07.487567  147758 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:07.731314  147758 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:07.919060  147758 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:07.919565  147758 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:07.922740  147758 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:05.227611  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.229836  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:06.065246  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:08.067360  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:07.925140  147758 out.go:235]   - Booting up control plane ...
	I1010 19:28:07.925239  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:07.925356  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:07.925444  147758 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:07.944375  147758 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:07.951182  147758 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:07.951274  147758 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:08.087325  147758 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:08.087560  147758 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:08.598361  147758 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 511.081439ms
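(The kubelet-check above probes the kubelet's local health endpoint at http://127.0.0.1:10248/healthz until it answers. Below is a minimal sketch of such a probe; the client timeout and poll interval are illustrative.)

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute) // kubeadm allows up to 4m0s for this check
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kubelet /healthz")
}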
	I1010 19:28:08.598502  147758 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:09.727932  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:12.227939  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:10.566945  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:13.067142  148525 pod_ready.go:103] pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.100517  147758 kubeadm.go:310] [api-check] The API server is healthy after 5.501985157s
	I1010 19:28:14.119932  147758 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:14.149557  147758 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:14.207413  147758 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:14.207735  147758 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-541370 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:14.226199  147758 kubeadm.go:310] [bootstrap-token] Using token: sbg4v0.t5me93bb5vn8m913
	I1010 19:28:14.228059  147758 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:14.228208  147758 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:14.241706  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:14.256554  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:14.263129  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:14.274346  147758 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:14.282313  147758 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:14.507850  147758 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:14.970234  147758 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:15.508328  147758 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:15.509530  147758 kubeadm.go:310] 
	I1010 19:28:15.509635  147758 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:15.509653  147758 kubeadm.go:310] 
	I1010 19:28:15.509743  147758 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:15.509762  147758 kubeadm.go:310] 
	I1010 19:28:15.509795  147758 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:15.509888  147758 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:15.509954  147758 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:15.509970  147758 kubeadm.go:310] 
	I1010 19:28:15.510083  147758 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:15.510103  147758 kubeadm.go:310] 
	I1010 19:28:15.510203  147758 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:15.510214  147758 kubeadm.go:310] 
	I1010 19:28:15.510297  147758 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:15.510410  147758 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:15.510489  147758 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:15.510495  147758 kubeadm.go:310] 
	I1010 19:28:15.510603  147758 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:15.510707  147758 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:15.510724  147758 kubeadm.go:310] 
	I1010 19:28:15.510807  147758 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.510958  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:15.511005  147758 kubeadm.go:310] 	--control-plane 
	I1010 19:28:15.511034  147758 kubeadm.go:310] 
	I1010 19:28:15.511161  147758 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:15.511173  147758 kubeadm.go:310] 
	I1010 19:28:15.511268  147758 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sbg4v0.t5me93bb5vn8m913 \
	I1010 19:28:15.511403  147758 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:15.512298  147758 kubeadm.go:310] W1010 19:28:06.890572    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512594  147758 kubeadm.go:310] W1010 19:28:06.891448    2539 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:15.512702  147758 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
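(The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info; the CA lives under the certificateDir /var/lib/minikube/certs logged earlier. Below is a short sketch of how such a hash can be recomputed for verification; the file path is illustrative.)

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Illustrative path to the cluster CA certificate.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded Subject Public Key Info of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}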
	I1010 19:28:15.512734  147758 cni.go:84] Creating CNI manager for ""
	I1010 19:28:15.512744  147758 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:15.514703  147758 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:15.516229  147758 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:15.527554  147758 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:15.549266  147758 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:15.549362  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:15.549399  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-541370 minikube.k8s.io/updated_at=2024_10_10T19_28_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=embed-certs-541370 minikube.k8s.io/primary=true
	I1010 19:28:15.590732  147758 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:15.740942  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.241392  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:16.741807  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:14.229241  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:16.727260  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:14.059512  148525 pod_ready.go:82] duration metric: took 4m0.00022742s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:14.059550  148525 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-h5kjm" in "kube-system" namespace to be "Ready" (will not retry!)
	I1010 19:28:14.059569  148525 pod_ready.go:39] duration metric: took 4m7.001942194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:14.059614  148525 kubeadm.go:597] duration metric: took 4m14.998320151s to restartPrimaryControlPlane
	W1010 19:28:14.059672  148525 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1010 19:28:14.059698  148525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:28:17.241315  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:17.741580  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.241006  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:18.742042  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.241251  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.741030  147758 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:19.862541  147758 kubeadm.go:1113] duration metric: took 4.313246481s to wait for elevateKubeSystemPrivileges
	I1010 19:28:19.862579  147758 kubeadm.go:394] duration metric: took 5m2.288571479s to StartCluster
	I1010 19:28:19.862628  147758 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.862751  147758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:19.864528  147758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:19.864812  147758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:19.864910  147758 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:19.865019  147758 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-541370"
	I1010 19:28:19.865041  147758 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-541370"
	W1010 19:28:19.865053  147758 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:19.865062  147758 addons.go:69] Setting default-storageclass=true in profile "embed-certs-541370"
	I1010 19:28:19.865085  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865077  147758 config.go:182] Loaded profile config "embed-certs-541370": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:19.865129  147758 addons.go:69] Setting metrics-server=true in profile "embed-certs-541370"
	I1010 19:28:19.865164  147758 addons.go:234] Setting addon metrics-server=true in "embed-certs-541370"
	W1010 19:28:19.865179  147758 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:19.865115  147758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-541370"
	I1010 19:28:19.865215  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.865558  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865593  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865607  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865629  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.865595  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.865725  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.866857  147758 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:19.868590  147758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:19.882524  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I1010 19:28:19.882595  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I1010 19:28:19.882678  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I1010 19:28:19.883065  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883168  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883281  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.883559  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883575  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883657  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883669  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883802  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.883818  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.883968  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.883976  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884141  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.884194  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.884408  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884437  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.884684  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.884746  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.887912  147758 addons.go:234] Setting addon default-storageclass=true in "embed-certs-541370"
	W1010 19:28:19.887942  147758 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:19.887973  147758 host.go:66] Checking if "embed-certs-541370" exists ...
	I1010 19:28:19.888333  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.888383  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.901588  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I1010 19:28:19.902131  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.902597  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.902621  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.902927  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.903101  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.904556  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.905207  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33927
	I1010 19:28:19.905621  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.906188  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.906209  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.906599  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.906647  147758 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:19.906837  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.907699  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1010 19:28:19.908147  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.908557  147758 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:19.908584  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:19.908610  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.908705  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.908717  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.908745  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.909364  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.910154  147758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:19.910208  147758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:19.910840  147758 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:19.912716  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.912722  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:19.912743  147758 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:19.912769  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.913199  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.913224  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.913500  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.913682  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.913845  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.913972  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.921800  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922343  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.922374  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.922653  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.922842  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.922965  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.923108  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:19.935097  147758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39933
	I1010 19:28:19.935605  147758 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:19.936123  147758 main.go:141] libmachine: Using API Version  1
	I1010 19:28:19.936146  147758 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:19.936561  147758 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:19.936747  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetState
	I1010 19:28:19.938789  147758 main.go:141] libmachine: (embed-certs-541370) Calling .DriverName
	I1010 19:28:19.939019  147758 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:19.939034  147758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:19.939054  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHHostname
	I1010 19:28:19.941682  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942137  147758 main.go:141] libmachine: (embed-certs-541370) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:ee:d0", ip: ""} in network mk-embed-certs-541370: {Iface:virbr1 ExpiryTime:2024-10-10 20:23:02 +0000 UTC Type:0 Mac:52:54:00:e2:ee:d0 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:embed-certs-541370 Clientid:01:52:54:00:e2:ee:d0}
	I1010 19:28:19.942165  147758 main.go:141] libmachine: (embed-certs-541370) DBG | domain embed-certs-541370 has defined IP address 192.168.39.120 and MAC address 52:54:00:e2:ee:d0 in network mk-embed-certs-541370
	I1010 19:28:19.942404  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHPort
	I1010 19:28:19.942642  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHKeyPath
	I1010 19:28:19.942767  147758 main.go:141] libmachine: (embed-certs-541370) Calling .GetSSHUsername
	I1010 19:28:19.942915  147758 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/embed-certs-541370/id_rsa Username:docker}
	I1010 19:28:20.108247  147758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:20.149819  147758 node_ready.go:35] waiting up to 6m0s for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163096  147758 node_ready.go:49] node "embed-certs-541370" has status "Ready":"True"
	I1010 19:28:20.163118  147758 node_ready.go:38] duration metric: took 13.26779ms for node "embed-certs-541370" to be "Ready" ...
	I1010 19:28:20.163128  147758 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:20.168620  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:20.241952  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:20.241978  147758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:20.249679  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:20.290149  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:20.290190  147758 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:20.291475  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:20.410539  147758 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.410582  147758 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:20.491567  147758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:20.684370  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684403  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.684695  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.684742  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.684749  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.684756  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.684764  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.685029  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.685059  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:20.685036  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:20.695901  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:20.695926  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:20.696202  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:20.696249  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439463  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.147952803s)
	I1010 19:28:21.439626  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.439659  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.439951  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.439969  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.439976  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.439997  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.440009  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.440299  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.440298  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.440314  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.780486  147758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.288854773s)
	I1010 19:28:21.780551  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.780567  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.780948  147758 main.go:141] libmachine: (embed-certs-541370) DBG | Closing plugin on server side
	I1010 19:28:21.780980  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.780996  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781007  147758 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:21.781016  147758 main.go:141] libmachine: (embed-certs-541370) Calling .Close
	I1010 19:28:21.781289  147758 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:21.781310  147758 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:21.781331  147758 addons.go:475] Verifying addon metrics-server=true in "embed-certs-541370"
	I1010 19:28:21.783512  147758 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:21.784958  147758 addons.go:510] duration metric: took 1.92006141s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:19.225844  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:21.227960  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:23.726439  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:22.195129  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:24.678736  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:25.727053  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.727657  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:27.177348  147758 pod_ready.go:103] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:29.177459  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.177485  147758 pod_ready.go:82] duration metric: took 9.008841503s for pod "coredns-7c65d6cfc9-59752" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.177495  147758 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182744  147758 pod_ready.go:93] pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.182777  147758 pod_ready.go:82] duration metric: took 5.273263ms for pod "coredns-7c65d6cfc9-n7wxs" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.182791  147758 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191507  147758 pod_ready.go:93] pod "etcd-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.191539  147758 pod_ready.go:82] duration metric: took 8.738961ms for pod "etcd-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.191554  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199167  147758 pod_ready.go:93] pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.199218  147758 pod_ready.go:82] duration metric: took 7.635672ms for pod "kube-apiserver-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.199234  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204558  147758 pod_ready.go:93] pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.204581  147758 pod_ready.go:82] duration metric: took 5.337574ms for pod "kube-controller-manager-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.204591  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573781  147758 pod_ready.go:93] pod "kube-proxy-6hdds" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.573808  147758 pod_ready.go:82] duration metric: took 369.210969ms for pod "kube-proxy-6hdds" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.573818  147758 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974015  147758 pod_ready.go:93] pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace has status "Ready":"True"
	I1010 19:28:29.974039  147758 pod_ready.go:82] duration metric: took 400.214845ms for pod "kube-scheduler-embed-certs-541370" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:29.974048  147758 pod_ready.go:39] duration metric: took 9.810911064s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:29.974066  147758 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:29.974120  147758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:29.991332  147758 api_server.go:72] duration metric: took 10.126480862s to wait for apiserver process to appear ...
	I1010 19:28:29.991356  147758 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:29.991382  147758 api_server.go:253] Checking apiserver healthz at https://192.168.39.120:8443/healthz ...
	I1010 19:28:29.995855  147758 api_server.go:279] https://192.168.39.120:8443/healthz returned 200:
	ok
	I1010 19:28:29.997488  147758 api_server.go:141] control plane version: v1.31.1
	I1010 19:28:29.997516  147758 api_server.go:131] duration metric: took 6.152312ms to wait for apiserver health ...
	I1010 19:28:29.997526  147758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:28:30.176631  147758 system_pods.go:59] 9 kube-system pods found
	I1010 19:28:30.176662  147758 system_pods.go:61] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.176668  147758 system_pods.go:61] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.176672  147758 system_pods.go:61] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.176676  147758 system_pods.go:61] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.176680  147758 system_pods.go:61] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.176683  147758 system_pods.go:61] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.176686  147758 system_pods.go:61] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.176693  147758 system_pods.go:61] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.176699  147758 system_pods.go:61] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.176707  147758 system_pods.go:74] duration metric: took 179.174083ms to wait for pod list to return data ...
	I1010 19:28:30.176714  147758 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:28:30.375326  147758 default_sa.go:45] found service account: "default"
	I1010 19:28:30.375361  147758 default_sa.go:55] duration metric: took 198.640267ms for default service account to be created ...
	I1010 19:28:30.375374  147758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:28:30.578749  147758 system_pods.go:86] 9 kube-system pods found
	I1010 19:28:30.578780  147758 system_pods.go:89] "coredns-7c65d6cfc9-59752" [f7980c69-dd8e-42e0-a0ab-1dedf2203367] Running
	I1010 19:28:30.578786  147758 system_pods.go:89] "coredns-7c65d6cfc9-n7wxs" [ac89df92-bbdb-432b-8c4a-d9ced3c2f2b5] Running
	I1010 19:28:30.578790  147758 system_pods.go:89] "etcd-embed-certs-541370" [94f92d38-753f-47c7-95b6-7ef0486a172d] Running
	I1010 19:28:30.578794  147758 system_pods.go:89] "kube-apiserver-embed-certs-541370" [b60f7ec1-6416-4e0b-8f49-d673496187b6] Running
	I1010 19:28:30.578797  147758 system_pods.go:89] "kube-controller-manager-embed-certs-541370" [ca09511b-ec93-4146-9d43-5fbc0880394a] Running
	I1010 19:28:30.578801  147758 system_pods.go:89] "kube-proxy-6hdds" [fe7cbbf4-12be-469d-b176-37c4daccab96] Running
	I1010 19:28:30.578804  147758 system_pods.go:89] "kube-scheduler-embed-certs-541370" [37c96f22-d5a4-4233-ba65-7367b075656a] Running
	I1010 19:28:30.578810  147758 system_pods.go:89] "metrics-server-6867b74b74-znhn4" [5dc1f764-c7c7-480e-b787-5f5cf6c14a84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:28:30.578814  147758 system_pods.go:89] "storage-provisioner" [cfb28184-daef-40be-9170-b42058727418] Running
	I1010 19:28:30.578822  147758 system_pods.go:126] duration metric: took 203.441477ms to wait for k8s-apps to be running ...
	I1010 19:28:30.578829  147758 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:28:30.578877  147758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:30.596523  147758 system_svc.go:56] duration metric: took 17.684729ms WaitForService to wait for kubelet
	I1010 19:28:30.596553  147758 kubeadm.go:582] duration metric: took 10.731708748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:28:30.596573  147758 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:28:30.774749  147758 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:28:30.774783  147758 node_conditions.go:123] node cpu capacity is 2
	I1010 19:28:30.774807  147758 node_conditions.go:105] duration metric: took 178.228671ms to run NodePressure ...
	I1010 19:28:30.774822  147758 start.go:241] waiting for startup goroutines ...
	I1010 19:28:30.774831  147758 start.go:246] waiting for cluster config update ...
	I1010 19:28:30.774845  147758 start.go:255] writing updated cluster config ...
	I1010 19:28:30.775121  147758 ssh_runner.go:195] Run: rm -f paused
	I1010 19:28:30.826689  147758 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:28:30.828795  147758 out.go:177] * Done! kubectl is now configured to use "embed-certs-541370" cluster and "default" namespace by default
	I1010 19:28:29.728096  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:32.229632  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:34.726536  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:36.727032  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:38.727488  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:40.372903  148525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.31317648s)
	I1010 19:28:40.372991  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:28:40.389319  148525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 19:28:40.400123  148525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:28:40.411906  148525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:28:40.411932  148525 kubeadm.go:157] found existing configuration files:
	
	I1010 19:28:40.411976  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1010 19:28:40.421840  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:28:40.421904  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:28:40.432229  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1010 19:28:40.442121  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:28:40.442203  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:28:40.452969  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.463085  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:28:40.463146  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:28:40.473103  148525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1010 19:28:40.482854  148525 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:28:40.482914  148525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:28:40.494023  148525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:28:40.543369  148525 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1010 19:28:40.543466  148525 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:28:40.657301  148525 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:28:40.657462  148525 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:28:40.657579  148525 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 19:28:40.669222  148525 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:28:40.670995  148525 out.go:235]   - Generating certificates and keys ...
	I1010 19:28:40.671102  148525 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:28:40.671171  148525 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:28:40.671284  148525 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:28:40.671374  148525 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:28:40.671471  148525 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:28:40.671557  148525 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:28:40.671650  148525 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:28:40.671751  148525 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:28:40.671895  148525 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:28:40.672000  148525 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:28:40.672056  148525 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:28:40.672136  148525 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:28:40.876613  148525 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:28:41.109518  148525 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 19:28:41.186751  148525 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:28:41.424710  148525 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:28:41.479611  148525 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:28:41.480235  148525 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:28:41.483222  148525 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:28:41.227521  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:43.728023  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:41.484809  148525 out.go:235]   - Booting up control plane ...
	I1010 19:28:41.484935  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:28:41.485020  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:28:41.485317  148525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:28:41.506919  148525 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:28:41.517006  148525 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:28:41.517077  148525 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:28:41.653211  148525 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 19:28:41.653364  148525 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 19:28:42.655360  148525 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001910447s
	I1010 19:28:42.655482  148525 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1010 19:28:47.658431  148525 kubeadm.go:310] [api-check] The API server is healthy after 5.003169217s
	I1010 19:28:47.676178  148525 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 19:28:47.694752  148525 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 19:28:47.720376  148525 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 19:28:47.720645  148525 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-361847 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 19:28:47.736489  148525 kubeadm.go:310] [bootstrap-token] Using token: cprf0t.lm4xp75yi0cdu4sy
	I1010 19:28:46.228217  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:48.726740  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:47.737958  148525 out.go:235]   - Configuring RBAC rules ...
	I1010 19:28:47.738089  148525 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 19:28:47.750073  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 19:28:47.758010  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 19:28:47.761649  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 19:28:47.768953  148525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 19:28:47.774428  148525 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 19:28:48.065988  148525 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 19:28:48.502538  148525 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1010 19:28:49.066479  148525 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1010 19:28:49.069842  148525 kubeadm.go:310] 
	I1010 19:28:49.069937  148525 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1010 19:28:49.069947  148525 kubeadm.go:310] 
	I1010 19:28:49.070046  148525 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1010 19:28:49.070058  148525 kubeadm.go:310] 
	I1010 19:28:49.070089  148525 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1010 19:28:49.070166  148525 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 19:28:49.070254  148525 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 19:28:49.070265  148525 kubeadm.go:310] 
	I1010 19:28:49.070342  148525 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1010 19:28:49.070353  148525 kubeadm.go:310] 
	I1010 19:28:49.070446  148525 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 19:28:49.070478  148525 kubeadm.go:310] 
	I1010 19:28:49.070544  148525 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1010 19:28:49.070640  148525 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 19:28:49.070750  148525 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 19:28:49.070773  148525 kubeadm.go:310] 
	I1010 19:28:49.070880  148525 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 19:28:49.070990  148525 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1010 19:28:49.071001  148525 kubeadm.go:310] 
	I1010 19:28:49.071153  148525 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.071299  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda \
	I1010 19:28:49.071330  148525 kubeadm.go:310] 	--control-plane 
	I1010 19:28:49.071349  148525 kubeadm.go:310] 
	I1010 19:28:49.071468  148525 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1010 19:28:49.071497  148525 kubeadm.go:310] 
	I1010 19:28:49.072228  148525 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token cprf0t.lm4xp75yi0cdu4sy \
	I1010 19:28:49.072354  148525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:bc8568f96f96483e4403a7eff9866ac7bd8b0e76d8203796a49dd1a769f11bda 
	I1010 19:28:49.074595  148525 kubeadm.go:310] W1010 19:28:40.525557    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.074944  148525 kubeadm.go:310] W1010 19:28:40.526329    2560 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1010 19:28:49.075102  148525 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:28:49.075143  148525 cni.go:84] Creating CNI manager for ""
	I1010 19:28:49.075166  148525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 19:28:49.077190  148525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1010 19:28:49.078665  148525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1010 19:28:49.091792  148525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1010 19:28:49.113801  148525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 19:28:49.113920  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-361847 minikube.k8s.io/updated_at=2024_10_10T19_28_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e90d7de550e839433dc631623053398b652747fc minikube.k8s.io/name=default-k8s-diff-port-361847 minikube.k8s.io/primary=true
	I1010 19:28:49.114074  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.154398  148525 ops.go:34] apiserver oom_adj: -16
	I1010 19:28:49.351271  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:49.852049  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.351441  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:50.852022  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.351391  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:51.851329  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.351840  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:52.852392  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.351397  148525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 19:28:53.443325  148525 kubeadm.go:1113] duration metric: took 4.329288133s to wait for elevateKubeSystemPrivileges
	I1010 19:28:53.443363  148525 kubeadm.go:394] duration metric: took 4m54.439732071s to StartCluster
	I1010 19:28:53.443386  148525 settings.go:142] acquiring lock: {Name:mk8d73e472e6ca16e47be8eb712406a95625b8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.443481  148525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:28:53.445465  148525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19787-81676/kubeconfig: {Name:mk31297b607b774870865548d952622c04d970cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 19:28:53.445747  148525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.32 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 19:28:53.445842  148525 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 19:28:53.445957  148525 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.445980  148525 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.445992  148525 addons.go:243] addon storage-provisioner should already be in state true
	I1010 19:28:53.446004  148525 config.go:182] Loaded profile config "default-k8s-diff-port-361847": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:28:53.446026  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446065  148525 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446100  148525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-361847"
	I1010 19:28:53.446085  148525 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-361847"
	I1010 19:28:53.446137  148525 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.446151  148525 addons.go:243] addon metrics-server should already be in state true
	I1010 19:28:53.446242  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.446515  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.446562  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447089  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447135  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.447315  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.447360  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.450779  148525 out.go:177] * Verifying Kubernetes components...
	I1010 19:28:53.452838  148525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 19:28:53.465502  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36411
	I1010 19:28:53.466020  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.466572  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.466594  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.466772  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I1010 19:28:53.467034  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.467209  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.467310  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.467828  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.467857  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.467899  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1010 19:28:53.468270  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.468451  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.468866  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.468891  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.469102  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.469150  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.469484  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.470068  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.470114  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.471192  148525 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-361847"
	W1010 19:28:53.471213  148525 addons.go:243] addon default-storageclass should already be in state true
	I1010 19:28:53.471261  148525 host.go:66] Checking if "default-k8s-diff-port-361847" exists ...
	I1010 19:28:53.471618  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.471664  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.486550  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1010 19:28:53.487068  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.487608  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.487626  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.488015  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.488329  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.490200  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44971
	I1010 19:28:53.490240  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.490790  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.491318  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.491341  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.491682  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.491957  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1010 19:28:53.492100  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.492423  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.492731  148525 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1010 19:28:53.492811  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.492831  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.493240  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.493885  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.493979  148525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 19:28:53.494031  148525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 19:28:53.494359  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 19:28:53.494381  148525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 19:28:53.494397  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.495771  148525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 19:28:51.226596  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227299  147213 pod_ready.go:103] pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:53.227335  147213 pod_ready.go:82] duration metric: took 4m0.007224391s for pod "metrics-server-6867b74b74-8w9lk" in "kube-system" namespace to be "Ready" ...
	E1010 19:28:53.227346  147213 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1010 19:28:53.227355  147213 pod_ready.go:39] duration metric: took 4m5.554224355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.227375  147213 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:28:53.227419  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:53.227484  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:53.288713  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.288749  147213 cri.go:89] found id: ""
	I1010 19:28:53.288759  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:53.288823  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.294819  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:53.294904  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:53.340169  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:53.340197  147213 cri.go:89] found id: ""
	I1010 19:28:53.340207  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:53.340271  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.345214  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:53.345292  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:53.392808  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.392838  147213 cri.go:89] found id: ""
	I1010 19:28:53.392859  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:53.392921  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.398275  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:53.398361  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:53.439567  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.439594  147213 cri.go:89] found id: ""
	I1010 19:28:53.439604  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:53.439665  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.444366  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:53.444436  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:53.522580  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:53.522597  147213 cri.go:89] found id: ""
	I1010 19:28:53.522605  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:53.522654  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.528890  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:53.528974  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:53.575933  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:53.575963  147213 cri.go:89] found id: ""
	I1010 19:28:53.575975  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:53.576035  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.581693  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:53.581763  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:53.619789  147213 cri.go:89] found id: ""
	I1010 19:28:53.619819  147213 logs.go:282] 0 containers: []
	W1010 19:28:53.619831  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:53.619839  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:53.619899  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:53.659715  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:53.659746  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:53.659752  147213 cri.go:89] found id: ""
	I1010 19:28:53.659762  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:53.659828  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.664377  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:53.668766  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:53.668796  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:53.685976  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:53.686007  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:53.497232  148525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:53.497251  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 19:28:53.497273  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.497732  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498599  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.498627  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.498971  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.499159  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.499312  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.499414  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.501044  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501509  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.501531  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.501782  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.501956  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.502080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.502232  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.512240  148525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1010 19:28:53.512809  148525 main.go:141] libmachine: () Calling .GetVersion
	I1010 19:28:53.513347  148525 main.go:141] libmachine: Using API Version  1
	I1010 19:28:53.513368  148525 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 19:28:53.513787  148525 main.go:141] libmachine: () Calling .GetMachineName
	I1010 19:28:53.514001  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetState
	I1010 19:28:53.515436  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .DriverName
	I1010 19:28:53.515639  148525 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.515659  148525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 19:28:53.515681  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHHostname
	I1010 19:28:53.518128  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518596  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:72:58", ip: ""} in network mk-default-k8s-diff-port-361847: {Iface:virbr2 ExpiryTime:2024-10-10 20:23:44 +0000 UTC Type:0 Mac:52:54:00:a6:72:58 Iaid: IPaddr:192.168.50.32 Prefix:24 Hostname:default-k8s-diff-port-361847 Clientid:01:52:54:00:a6:72:58}
	I1010 19:28:53.518628  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | domain default-k8s-diff-port-361847 has defined IP address 192.168.50.32 and MAC address 52:54:00:a6:72:58 in network mk-default-k8s-diff-port-361847
	I1010 19:28:53.518909  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHPort
	I1010 19:28:53.519080  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHKeyPath
	I1010 19:28:53.519216  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .GetSSHUsername
	I1010 19:28:53.519376  148525 sshutil.go:53] new ssh client: &{IP:192.168.50.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/default-k8s-diff-port-361847/id_rsa Username:docker}
	I1010 19:28:53.712871  148525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 19:28:53.755059  148525 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766564  148525 node_ready.go:49] node "default-k8s-diff-port-361847" has status "Ready":"True"
	I1010 19:28:53.766590  148525 node_ready.go:38] duration metric: took 11.490223ms for node "default-k8s-diff-port-361847" to be "Ready" ...
	I1010 19:28:53.766603  148525 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:28:53.777458  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:28:53.875493  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 19:28:53.875525  148525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1010 19:28:53.911443  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 19:28:53.944885  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 19:28:53.944919  148525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 19:28:53.945487  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 19:28:54.011209  148525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.011239  148525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 19:28:54.039679  148525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 19:28:54.598172  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598226  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598584  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598608  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.598619  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.598629  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.598898  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:54.598931  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.598939  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:54.643365  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:54.643392  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:54.643734  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:54.643760  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287018  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.341483807s)
	I1010 19:28:55.287045  148525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.247326452s)
	I1010 19:28:55.287089  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287094  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287106  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287112  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287440  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287479  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287506  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287524  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287570  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287589  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287598  148525 main.go:141] libmachine: Making call to close driver server
	I1010 19:28:55.287607  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) Calling .Close
	I1010 19:28:55.287818  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287831  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287840  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.287855  148525 main.go:141] libmachine: Successfully made call to close driver server
	I1010 19:28:55.287862  148525 main.go:141] libmachine: Making call to close connection to plugin binary
	I1010 19:28:55.287872  148525 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-361847"
	I1010 19:28:55.287880  148525 main.go:141] libmachine: (default-k8s-diff-port-361847) DBG | Closing plugin on server side
	I1010 19:28:55.289944  148525 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1010 19:28:53.841387  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:53.841441  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:53.892951  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:53.893005  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:53.947636  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:53.947668  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:53.992969  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:53.992998  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:54.520652  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:54.520703  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:28:54.588366  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:54.588418  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:54.651179  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:54.651227  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:54.712881  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:54.712925  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:54.779030  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:54.779094  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:54.821961  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:54.822002  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:54.871409  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:54.871446  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:57.425310  147213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:28:57.442308  147213 api_server.go:72] duration metric: took 4m17.02881034s to wait for apiserver process to appear ...
	I1010 19:28:57.442343  147213 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:28:57.442383  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:28:57.442444  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:28:57.481392  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.481420  147213 cri.go:89] found id: ""
	I1010 19:28:57.481430  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:28:57.481503  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.486191  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:28:57.486269  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:28:57.532238  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.532271  147213 cri.go:89] found id: ""
	I1010 19:28:57.532284  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:28:57.532357  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.538105  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:28:57.538188  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:28:57.579729  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:57.579757  147213 cri.go:89] found id: ""
	I1010 19:28:57.579767  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:28:57.579833  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.584494  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:28:57.584568  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:28:57.623920  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:57.623949  147213 cri.go:89] found id: ""
	I1010 19:28:57.623960  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:28:57.624028  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.628927  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:28:57.629018  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:28:57.669669  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.669698  147213 cri.go:89] found id: ""
	I1010 19:28:57.669707  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:28:57.669771  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.674449  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:28:57.674526  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:28:57.721856  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:57.721881  147213 cri.go:89] found id: ""
	I1010 19:28:57.721891  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:28:57.721955  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.726422  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:28:57.726497  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:28:57.764464  147213 cri.go:89] found id: ""
	I1010 19:28:57.764499  147213 logs.go:282] 0 containers: []
	W1010 19:28:57.764512  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:28:57.764521  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:28:57.764595  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:28:57.809758  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:57.809784  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:57.809788  147213 cri.go:89] found id: ""
	I1010 19:28:57.809797  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:28:57.809854  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.815576  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:28:57.820152  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:28:57.820181  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:28:57.869339  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:28:57.869383  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:28:57.918698  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:28:57.918739  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:28:57.960939  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:28:57.960985  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:28:58.013572  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:28:58.013612  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:28:58.053247  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:28:58.053277  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:28:58.507428  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:28:58.507473  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:28:58.552704  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:28:58.552742  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:28:58.672077  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:28:58.672127  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:28:58.690997  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:28:58.691049  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:28:58.735251  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:28:58.735287  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:28:55.291700  148525 addons.go:510] duration metric: took 1.845864985s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1010 19:28:55.785186  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:28:57.789567  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:00.284444  148525 pod_ready.go:103] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"False"
	I1010 19:29:01.297627  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.297660  148525 pod_ready.go:82] duration metric: took 7.520173084s for pod "coredns-7c65d6cfc9-dh9th" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.297676  148525 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804654  148525 pod_ready.go:93] pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.804676  148525 pod_ready.go:82] duration metric: took 506.992872ms for pod "coredns-7c65d6cfc9-fgxh7" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.804690  148525 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809788  148525 pod_ready.go:93] pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.809814  148525 pod_ready.go:82] duration metric: took 5.116023ms for pod "etcd-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.809825  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814460  148525 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.814486  148525 pod_ready.go:82] duration metric: took 4.652085ms for pod "kube-apiserver-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.814501  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819719  148525 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:01.819741  148525 pod_ready.go:82] duration metric: took 5.231258ms for pod "kube-controller-manager-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:01.819753  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082285  148525 pod_ready.go:93] pod "kube-proxy-jlvn6" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.082325  148525 pod_ready.go:82] duration metric: took 262.562954ms for pod "kube-proxy-jlvn6" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.082342  148525 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481705  148525 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace has status "Ready":"True"
	I1010 19:29:02.481730  148525 pod_ready.go:82] duration metric: took 399.378957ms for pod "kube-scheduler-default-k8s-diff-port-361847" in "kube-system" namespace to be "Ready" ...
	I1010 19:29:02.481742  148525 pod_ready.go:39] duration metric: took 8.715126416s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1010 19:29:02.481779  148525 api_server.go:52] waiting for apiserver process to appear ...
	I1010 19:29:02.481832  148525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 19:29:02.498706  148525 api_server.go:72] duration metric: took 9.052891898s to wait for apiserver process to appear ...
	I1010 19:29:02.498760  148525 api_server.go:88] waiting for apiserver healthz status ...
	I1010 19:29:02.498795  148525 api_server.go:253] Checking apiserver healthz at https://192.168.50.32:8444/healthz ...
	I1010 19:29:02.503501  148525 api_server.go:279] https://192.168.50.32:8444/healthz returned 200:
	ok
	I1010 19:29:02.504594  148525 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:02.504620  148525 api_server.go:131] duration metric: took 5.850548ms to wait for apiserver health ...
	I1010 19:29:02.504629  148525 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:02.685579  148525 system_pods.go:59] 9 kube-system pods found
	I1010 19:29:02.685611  148525 system_pods.go:61] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:02.685618  148525 system_pods.go:61] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:02.685624  148525 system_pods.go:61] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:02.685630  148525 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:02.685635  148525 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:02.685639  148525 system_pods.go:61] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:02.685644  148525 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:02.685653  148525 system_pods.go:61] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:02.685658  148525 system_pods.go:61] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:02.685669  148525 system_pods.go:74] duration metric: took 181.032548ms to wait for pod list to return data ...
	I1010 19:29:02.685683  148525 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:02.883256  148525 default_sa.go:45] found service account: "default"
	I1010 19:29:02.883288  148525 default_sa.go:55] duration metric: took 197.59742ms for default service account to be created ...
	I1010 19:29:02.883298  148525 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:03.084706  148525 system_pods.go:86] 9 kube-system pods found
	I1010 19:29:03.084737  148525 system_pods.go:89] "coredns-7c65d6cfc9-dh9th" [ff14d755-810a-497a-b1fc-7fe231748af3] Running
	I1010 19:29:03.084742  148525 system_pods.go:89] "coredns-7c65d6cfc9-fgxh7" [b4faa977-3205-4395-bda3-8fe24fdcf6cc] Running
	I1010 19:29:03.084746  148525 system_pods.go:89] "etcd-default-k8s-diff-port-361847" [d7e1e625-2945-4755-927a-aab5e40f5392] Running
	I1010 19:29:03.084751  148525 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-361847" [0f7dace6-6cdb-4438-a4b3-3fecac56c709] Running
	I1010 19:29:03.084755  148525 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-361847" [d08fb57e-d916-4d4f-929a-509c1c8c0f89] Running
	I1010 19:29:03.084759  148525 system_pods.go:89] "kube-proxy-jlvn6" [6336f682-0362-4855-b848-3540052aec19] Running
	I1010 19:29:03.084762  148525 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-361847" [60282768-5672-42b9-b9f5-6d5cd0b186cf] Running
	I1010 19:29:03.084768  148525 system_pods.go:89] "metrics-server-6867b74b74-fdf7p" [6f8ca204-13fe-4adb-9c09-33ec6821ff2d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:03.084772  148525 system_pods.go:89] "storage-provisioner" [ea1a4ade-9648-401f-a0ad-633ab3c1196b] Running
	I1010 19:29:03.084779  148525 system_pods.go:126] duration metric: took 201.476637ms to wait for k8s-apps to be running ...
	I1010 19:29:03.084787  148525 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:03.084832  148525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:03.100986  148525 system_svc.go:56] duration metric: took 16.183062ms WaitForService to wait for kubelet
	I1010 19:29:03.101026  148525 kubeadm.go:582] duration metric: took 9.655245557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:03.101050  148525 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:03.282063  148525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:03.282095  148525 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:03.282106  148525 node_conditions.go:105] duration metric: took 181.049888ms to run NodePressure ...
	I1010 19:29:03.282119  148525 start.go:241] waiting for startup goroutines ...
	I1010 19:29:03.282125  148525 start.go:246] waiting for cluster config update ...
	I1010 19:29:03.282135  148525 start.go:255] writing updated cluster config ...
	I1010 19:29:03.282414  148525 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:03.331838  148525 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:03.333698  148525 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-361847" cluster and "default" namespace by default
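The startup wait above ends once https://192.168.50.32:8444/healthz returns 200 with the body "ok" (the api_server.go lines at 19:29:02). As a quick manual cross-check, assuming the VM at 192.168.50.32 is still up and port 8444 is reachable from the host, the same probe can be issued by hand; -k here only skips certificate verification for the ad-hoc check:

	curl -k https://192.168.50.32:8444/healthz    # a healthy apiserver answers 200 with body "ok"

Anything other than a 200/"ok" response is what would keep the "waiting for apiserver healthz status" loop retrying.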
	I1010 19:28:58.775358  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:28:58.775396  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:28:58.812210  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:28:58.812269  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:01.381750  147213 api_server.go:253] Checking apiserver healthz at https://192.168.72.11:8443/healthz ...
	I1010 19:29:01.386658  147213 api_server.go:279] https://192.168.72.11:8443/healthz returned 200:
	ok
	I1010 19:29:01.387793  147213 api_server.go:141] control plane version: v1.31.1
	I1010 19:29:01.387819  147213 api_server.go:131] duration metric: took 3.945468552s to wait for apiserver health ...
	I1010 19:29:01.387829  147213 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 19:29:01.387861  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:29:01.387948  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:29:01.433312  147213 cri.go:89] found id: "20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:01.433344  147213 cri.go:89] found id: ""
	I1010 19:29:01.433433  147213 logs.go:282] 1 containers: [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba]
	I1010 19:29:01.433521  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.437920  147213 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:29:01.437983  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:29:01.476429  147213 cri.go:89] found id: "bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.476458  147213 cri.go:89] found id: ""
	I1010 19:29:01.476470  147213 logs.go:282] 1 containers: [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36]
	I1010 19:29:01.476522  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.480912  147213 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:29:01.480987  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:29:01.522141  147213 cri.go:89] found id: "3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.522164  147213 cri.go:89] found id: ""
	I1010 19:29:01.522173  147213 logs.go:282] 1 containers: [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846]
	I1010 19:29:01.522238  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.526742  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:29:01.526803  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:29:01.572715  147213 cri.go:89] found id: "d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:01.572747  147213 cri.go:89] found id: ""
	I1010 19:29:01.572759  147213 logs.go:282] 1 containers: [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023]
	I1010 19:29:01.572814  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.577754  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:29:01.577832  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:29:01.616077  147213 cri.go:89] found id: "3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.616104  147213 cri.go:89] found id: ""
	I1010 19:29:01.616121  147213 logs.go:282] 1 containers: [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664]
	I1010 19:29:01.616185  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.620622  147213 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:29:01.620702  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:29:01.662859  147213 cri.go:89] found id: "d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:01.662889  147213 cri.go:89] found id: ""
	I1010 19:29:01.662903  147213 logs.go:282] 1 containers: [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea]
	I1010 19:29:01.662964  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.667491  147213 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:29:01.667585  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:29:01.706191  147213 cri.go:89] found id: ""
	I1010 19:29:01.706217  147213 logs.go:282] 0 containers: []
	W1010 19:29:01.706228  147213 logs.go:284] No container was found matching "kindnet"
	I1010 19:29:01.706234  147213 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 19:29:01.706299  147213 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 19:29:01.753559  147213 cri.go:89] found id: "dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:01.753581  147213 cri.go:89] found id: "e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:01.753584  147213 cri.go:89] found id: ""
	I1010 19:29:01.753591  147213 logs.go:282] 2 containers: [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e]
	I1010 19:29:01.753645  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.758179  147213 ssh_runner.go:195] Run: which crictl
	I1010 19:29:01.762336  147213 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:29:01.762358  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1010 19:29:01.867667  147213 logs.go:123] Gathering logs for etcd [bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36] ...
	I1010 19:29:01.867698  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc9f1f069a0299235908025d3c8840e059ddc2fc2f579932edb134cff568c36"
	I1010 19:29:01.911722  147213 logs.go:123] Gathering logs for coredns [3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846] ...
	I1010 19:29:01.911756  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c98f0e3e46ce82feb8638281ffb8c8cdf326b15115a6edfe91de59de6868846"
	I1010 19:29:01.955152  147213 logs.go:123] Gathering logs for kube-proxy [3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664] ...
	I1010 19:29:01.955189  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a26f9cbec8dc8e6e1b1c1cc24a2432dc85819a78557be2263db6bc34e847664"
	I1010 19:29:01.995010  147213 logs.go:123] Gathering logs for kube-controller-manager [d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea] ...
	I1010 19:29:01.995041  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d59196636b2826805e226757e3c5d3683d49f2fddb43f2e965aeae02026742ea"
	I1010 19:29:02.047505  147213 logs.go:123] Gathering logs for storage-provisioner [dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab] ...
	I1010 19:29:02.047546  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfabbf70cd44985666c6b54a456e49c66a41cc19300a483565decd937ce240ab"
	I1010 19:29:02.085080  147213 logs.go:123] Gathering logs for storage-provisioner [e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e] ...
	I1010 19:29:02.085110  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e14d37c6da3f630d60720c5dc7ec414f060a7b327fafd27b633f3402e552840e"
	I1010 19:29:02.128482  147213 logs.go:123] Gathering logs for kubelet ...
	I1010 19:29:02.128527  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:29:02.194867  147213 logs.go:123] Gathering logs for dmesg ...
	I1010 19:29:02.194904  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:29:02.211881  147213 logs.go:123] Gathering logs for kube-apiserver [20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba] ...
	I1010 19:29:02.211911  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20a9cb514f18abab89d82173a52c43693b13548712f96726c491e14100e5deba"
	I1010 19:29:02.262969  147213 logs.go:123] Gathering logs for kube-scheduler [d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023] ...
	I1010 19:29:02.263013  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d397ef1d012accb6f9cb41fa5bd5ed4649588e9ad8552f87d2f9a579a9c3f023"
	I1010 19:29:02.302921  147213 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:29:02.302956  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:29:02.671102  147213 logs.go:123] Gathering logs for container status ...
	I1010 19:29:02.671169  147213 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 19:29:05.241477  147213 system_pods.go:59] 8 kube-system pods found
	I1010 19:29:05.241508  147213 system_pods.go:61] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.241513  147213 system_pods.go:61] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.241517  147213 system_pods.go:61] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.241521  147213 system_pods.go:61] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.241525  147213 system_pods.go:61] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.241528  147213 system_pods.go:61] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.241534  147213 system_pods.go:61] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.241540  147213 system_pods.go:61] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.241549  147213 system_pods.go:74] duration metric: took 3.853712488s to wait for pod list to return data ...
	I1010 19:29:05.241556  147213 default_sa.go:34] waiting for default service account to be created ...
	I1010 19:29:05.244686  147213 default_sa.go:45] found service account: "default"
	I1010 19:29:05.244721  147213 default_sa.go:55] duration metric: took 3.158069ms for default service account to be created ...
	I1010 19:29:05.244733  147213 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 19:29:05.249372  147213 system_pods.go:86] 8 kube-system pods found
	I1010 19:29:05.249398  147213 system_pods.go:89] "coredns-7c65d6cfc9-86brb" [28e5f869-f82f-4bd4-9d9c-89499fa89c89] Running
	I1010 19:29:05.249404  147213 system_pods.go:89] "etcd-no-preload-320324" [3027fc7b-a2cd-47c9-a6f1-122bfd0475a8] Running
	I1010 19:29:05.249408  147213 system_pods.go:89] "kube-apiserver-no-preload-320324" [6e3d2eab-4568-48e9-957c-e724ff89b5da] Running
	I1010 19:29:05.249413  147213 system_pods.go:89] "kube-controller-manager-no-preload-320324" [a6d65cd3-48b1-4faf-bb7f-f6433641d6f3] Running
	I1010 19:29:05.249418  147213 system_pods.go:89] "kube-proxy-vn6sv" [e5b2c419-7299-4bc4-b263-99408b9484eb] Running
	I1010 19:29:05.249425  147213 system_pods.go:89] "kube-scheduler-no-preload-320324" [e089c258-e720-4c78-ada1-eebbe89556c1] Running
	I1010 19:29:05.249433  147213 system_pods.go:89] "metrics-server-6867b74b74-8w9lk" [354939e6-2ca9-44f5-8e8e-c10493c68b79] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 19:29:05.249442  147213 system_pods.go:89] "storage-provisioner" [d4965b53-60c3-4a97-bd52-d164d977247a] Running
	I1010 19:29:05.249455  147213 system_pods.go:126] duration metric: took 4.715381ms to wait for k8s-apps to be running ...
	I1010 19:29:05.249467  147213 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 19:29:05.249519  147213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:05.265180  147213 system_svc.go:56] duration metric: took 15.703413ms WaitForService to wait for kubelet
	I1010 19:29:05.265216  147213 kubeadm.go:582] duration metric: took 4m24.851723603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 19:29:05.265237  147213 node_conditions.go:102] verifying NodePressure condition ...
	I1010 19:29:05.268775  147213 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1010 19:29:05.268807  147213 node_conditions.go:123] node cpu capacity is 2
	I1010 19:29:05.268821  147213 node_conditions.go:105] duration metric: took 3.575195ms to run NodePressure ...
	I1010 19:29:05.268834  147213 start.go:241] waiting for startup goroutines ...
	I1010 19:29:05.268840  147213 start.go:246] waiting for cluster config update ...
	I1010 19:29:05.268869  147213 start.go:255] writing updated cluster config ...
	I1010 19:29:05.269148  147213 ssh_runner.go:195] Run: rm -f paused
	I1010 19:29:05.319999  147213 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1010 19:29:05.322189  147213 out.go:177] * Done! kubectl is now configured to use "no-preload-320324" cluster and "default" namespace by default
	I1010 19:29:41.275119  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:29:41.275272  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:29:41.276822  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:29:41.276919  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:29:41.277017  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:29:41.277142  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:29:41.277254  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:29:41.277357  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:29:41.279069  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:29:41.279160  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:29:41.279217  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:29:41.279306  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:29:41.279381  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:29:41.279484  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:29:41.279576  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:29:41.279674  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:29:41.279779  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:29:41.279906  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:29:41.279971  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:29:41.280005  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:29:41.280052  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:29:41.280095  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:29:41.280144  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:29:41.280219  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:29:41.280317  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:29:41.280474  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:29:41.280583  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:29:41.280648  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:29:41.280736  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:29:41.282180  148123 out.go:235]   - Booting up control plane ...
	I1010 19:29:41.282266  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:29:41.282336  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:29:41.282414  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:29:41.282538  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:29:41.282748  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:29:41.282822  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:29:41.282896  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283061  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283123  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283287  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283348  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283519  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283623  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.283809  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.283900  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:29:41.284115  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:29:41.284135  148123 kubeadm.go:310] 
	I1010 19:29:41.284201  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:29:41.284243  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:29:41.284257  148123 kubeadm.go:310] 
	I1010 19:29:41.284307  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:29:41.284346  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:29:41.284481  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:29:41.284505  148123 kubeadm.go:310] 
	I1010 19:29:41.284655  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:29:41.284714  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:29:41.284752  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:29:41.284758  148123 kubeadm.go:310] 
	I1010 19:29:41.284913  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:29:41.285038  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:29:41.285055  148123 kubeadm.go:310] 
	I1010 19:29:41.285169  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:29:41.285307  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:29:41.285475  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:29:41.285587  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:29:41.285644  148123 kubeadm.go:310] 
	W1010 19:29:41.285747  148123 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1010 19:29:41.285792  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1010 19:29:46.681684  148123 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.395862838s)
	I1010 19:29:46.681797  148123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 19:29:46.696927  148123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 19:29:46.708250  148123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 19:29:46.708280  148123 kubeadm.go:157] found existing configuration files:
	
	I1010 19:29:46.708339  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 19:29:46.719748  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 19:29:46.719828  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 19:29:46.730892  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 19:29:46.742329  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 19:29:46.742401  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 19:29:46.752919  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.762860  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 19:29:46.762932  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 19:29:46.772513  148123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 19:29:46.781672  148123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 19:29:46.781746  148123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 19:29:46.791250  148123 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1010 19:29:47.018706  148123 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 19:31:43.351851  148123 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1010 19:31:43.351992  148123 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1010 19:31:43.353664  148123 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1010 19:31:43.353752  148123 kubeadm.go:310] [preflight] Running pre-flight checks
	I1010 19:31:43.353861  148123 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 19:31:43.353998  148123 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 19:31:43.354130  148123 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1010 19:31:43.354235  148123 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 19:31:43.356450  148123 out.go:235]   - Generating certificates and keys ...
	I1010 19:31:43.356557  148123 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1010 19:31:43.356652  148123 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1010 19:31:43.356783  148123 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1010 19:31:43.356902  148123 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1010 19:31:43.357002  148123 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1010 19:31:43.357074  148123 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1010 19:31:43.357181  148123 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1010 19:31:43.357247  148123 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1010 19:31:43.357325  148123 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1010 19:31:43.357408  148123 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1010 19:31:43.357459  148123 kubeadm.go:310] [certs] Using the existing "sa" key
	I1010 19:31:43.357529  148123 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 19:31:43.357604  148123 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 19:31:43.357676  148123 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 19:31:43.357751  148123 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 19:31:43.357830  148123 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 19:31:43.357979  148123 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 19:31:43.358092  148123 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 19:31:43.358158  148123 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1010 19:31:43.358263  148123 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 19:31:43.360216  148123 out.go:235]   - Booting up control plane ...
	I1010 19:31:43.360332  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 19:31:43.360463  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 19:31:43.360551  148123 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 19:31:43.360673  148123 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 19:31:43.360907  148123 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1010 19:31:43.360976  148123 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1010 19:31:43.361058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361309  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361410  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361648  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.361742  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.361960  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362058  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362308  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362399  148123 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1010 19:31:43.362640  148123 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1010 19:31:43.362652  148123 kubeadm.go:310] 
	I1010 19:31:43.362708  148123 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1010 19:31:43.362758  148123 kubeadm.go:310] 		timed out waiting for the condition
	I1010 19:31:43.362768  148123 kubeadm.go:310] 
	I1010 19:31:43.362809  148123 kubeadm.go:310] 	This error is likely caused by:
	I1010 19:31:43.362858  148123 kubeadm.go:310] 		- The kubelet is not running
	I1010 19:31:43.362981  148123 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1010 19:31:43.362993  148123 kubeadm.go:310] 
	I1010 19:31:43.363119  148123 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1010 19:31:43.363164  148123 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1010 19:31:43.363204  148123 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1010 19:31:43.363219  148123 kubeadm.go:310] 
	I1010 19:31:43.363344  148123 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1010 19:31:43.363454  148123 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1010 19:31:43.363465  148123 kubeadm.go:310] 
	I1010 19:31:43.363591  148123 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1010 19:31:43.363708  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1010 19:31:43.363803  148123 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1010 19:31:43.363892  148123 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1010 19:31:43.363978  148123 kubeadm.go:394] duration metric: took 8m3.085634474s to StartCluster
	I1010 19:31:43.364052  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 19:31:43.364159  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 19:31:43.364268  148123 kubeadm.go:310] 
	I1010 19:31:43.413041  148123 cri.go:89] found id: ""
	I1010 19:31:43.413075  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.413086  148123 logs.go:284] No container was found matching "kube-apiserver"
	I1010 19:31:43.413092  148123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 19:31:43.413206  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 19:31:43.456454  148123 cri.go:89] found id: ""
	I1010 19:31:43.456487  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.456499  148123 logs.go:284] No container was found matching "etcd"
	I1010 19:31:43.456507  148123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 19:31:43.456567  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 19:31:43.498582  148123 cri.go:89] found id: ""
	I1010 19:31:43.498614  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.498624  148123 logs.go:284] No container was found matching "coredns"
	I1010 19:31:43.498633  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 19:31:43.498694  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 19:31:43.540557  148123 cri.go:89] found id: ""
	I1010 19:31:43.540589  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.540598  148123 logs.go:284] No container was found matching "kube-scheduler"
	I1010 19:31:43.540605  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 19:31:43.540661  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 19:31:43.576743  148123 cri.go:89] found id: ""
	I1010 19:31:43.576776  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.576787  148123 logs.go:284] No container was found matching "kube-proxy"
	I1010 19:31:43.576796  148123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 19:31:43.576887  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 19:31:43.614620  148123 cri.go:89] found id: ""
	I1010 19:31:43.614652  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.614662  148123 logs.go:284] No container was found matching "kube-controller-manager"
	I1010 19:31:43.614668  148123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 19:31:43.614730  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 19:31:43.652008  148123 cri.go:89] found id: ""
	I1010 19:31:43.652061  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.652075  148123 logs.go:284] No container was found matching "kindnet"
	I1010 19:31:43.652084  148123 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1010 19:31:43.652153  148123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1010 19:31:43.689990  148123 cri.go:89] found id: ""
	I1010 19:31:43.690019  148123 logs.go:282] 0 containers: []
	W1010 19:31:43.690028  148123 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1010 19:31:43.690043  148123 logs.go:123] Gathering logs for kubelet ...
	I1010 19:31:43.690056  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 19:31:43.745634  148123 logs.go:123] Gathering logs for dmesg ...
	I1010 19:31:43.745673  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 19:31:43.760064  148123 logs.go:123] Gathering logs for describe nodes ...
	I1010 19:31:43.760095  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 19:31:43.840168  148123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 19:31:43.840197  148123 logs.go:123] Gathering logs for CRI-O ...
	I1010 19:31:43.840214  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 19:31:43.943265  148123 logs.go:123] Gathering logs for container status ...
	I1010 19:31:43.943303  148123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1010 19:31:43.987758  148123 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1010 19:31:43.987816  148123 out.go:270] * 
	W1010 19:31:43.987891  148123 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.987906  148123 out.go:270] * 
	W1010 19:31:43.988703  148123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 19:31:43.991928  148123 out.go:201] 
	W1010 19:31:43.993494  148123 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1010 19:31:43.993556  148123 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1010 19:31:43.993585  148123 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1010 19:31:43.995273  148123 out.go:201] 
	
	
	==> CRI-O <==
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.031369727Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589422031262641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fc8c9db-90a9-40b4-a9b8-45e661272cb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.031915426Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b03c293-bf7a-4343-a22b-1e09b29b6b3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.031969117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b03c293-bf7a-4343-a22b-1e09b29b6b3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.032015804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5b03c293-bf7a-4343-a22b-1e09b29b6b3b name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.064241928Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fff9818a-36d4-4098-8d75-15a71a458cef name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.064387070Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fff9818a-36d4-4098-8d75-15a71a458cef name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.065894727Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62710a50-c150-4498-9503-d8b961b4b817 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.066330609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589422066258004,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62710a50-c150-4498-9503-d8b961b4b817 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.066830704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e33c1a66-cc96-44cd-a65c-4ae0438b668a name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.066884067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e33c1a66-cc96-44cd-a65c-4ae0438b668a name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.066923384Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e33c1a66-cc96-44cd-a65c-4ae0438b668a name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.101149824Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7062d53-f187-4cb0-ae17-28f0796c36d6 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.101228667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7062d53-f187-4cb0-ae17-28f0796c36d6 name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.103510481Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90e665d9-1627-4693-a2d5-d2588d0ac33f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.103924507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589422103881431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90e665d9-1627-4693-a2d5-d2588d0ac33f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.104487902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8c6c56d-f2f6-4950-88ed-0560fffc01b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.104538650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8c6c56d-f2f6-4950-88ed-0560fffc01b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.104571036Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e8c6c56d-f2f6-4950-88ed-0560fffc01b0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.139384653Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=152f9fe0-b79d-4bf7-8139-8aad7a0112fa name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.139524101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=152f9fe0-b79d-4bf7-8139-8aad7a0112fa name=/runtime.v1.RuntimeService/Version
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.140509986Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=811207d6-e190-4750-8dc7-c87a40f9aa06 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.140892027Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728589422140872475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=811207d6-e190-4750-8dc7-c87a40f9aa06 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.141442140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f500ecac-b545-4e30-85de-bfce9dc01ed1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.141489695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f500ecac-b545-4e30-85de-bfce9dc01ed1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 10 19:43:42 old-k8s-version-947203 crio[635]: time="2024-10-10 19:43:42.141525786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f500ecac-b545-4e30-85de-bfce9dc01ed1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct10 19:23] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051246] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042600] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.085550] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.699486] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.514715] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.834131] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.134078] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.216712] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.120541] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.278860] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +6.492743] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.072493] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.094540] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[ +12.843516] kauditd_printk_skb: 46 callbacks suppressed
	[Oct10 19:27] systemd-fstab-generator[5080]: Ignoring "noauto" option for root device
	[Oct10 19:29] systemd-fstab-generator[5373]: Ignoring "noauto" option for root device
	[  +0.064417] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:43:42 up 20 min,  0 users,  load average: 0.03, 0.02, 0.00
	Linux old-k8s-version-947203 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000adb6f0)
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bbbef0, 0x4f0ac20, 0xc000ac7d10, 0x1, 0xc0001020c0)
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000a00c40, 0xc0001020c0)
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b6c8e0, 0xc000ae1d20)
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6915]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 10 19:43:37 old-k8s-version-947203 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 10 19:43:37 old-k8s-version-947203 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 10 19:43:37 old-k8s-version-947203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 144.
	Oct 10 19:43:37 old-k8s-version-947203 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 10 19:43:37 old-k8s-version-947203 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6924]: I1010 19:43:37.846413    6924 server.go:416] Version: v1.20.0
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6924]: I1010 19:43:37.846690    6924 server.go:837] Client rotation is on, will bootstrap in background
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6924]: I1010 19:43:37.848736    6924 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6924]: W1010 19:43:37.849646    6924 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 10 19:43:37 old-k8s-version-947203 kubelet[6924]: I1010 19:43:37.849990    6924 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947203 -n old-k8s-version-947203
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 2 (258.090213ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-947203" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (172.71s)
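The minikube log above closes with a concrete suggestion: pass --extra-config=kubelet.cgroup-driver=systemd to minikube start (see the related issue link, kubernetes/minikube#4172, printed in the same log). A rough sketch of that retry for this profile, with the profile name and Kubernetes version taken from the log and the remaining flags assumed rather than copied from this run, would be:

	out/minikube-linux-amd64 start -p old-k8s-version-947203 --kubernetes-version=v1.20.0 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd

This only mirrors the printed advice; the report itself does not establish whether it clears the kubelet crash loop shown in the "==> kubelet <==" section.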


Test pass (250/318)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.83
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.23
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 83.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 130.82
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/PullSecret 9.53
34 TestAddons/parallel/Registry 16.52
36 TestAddons/parallel/InspektorGadget 10.92
39 TestAddons/parallel/CSI 77.46
40 TestAddons/parallel/Headlamp 17.66
41 TestAddons/parallel/CloudSpanner 5.86
42 TestAddons/parallel/LocalPath 59.41
43 TestAddons/parallel/NvidiaDevicePlugin 5.49
44 TestAddons/parallel/Yakd 12.03
46 TestCertOptions 64.1
47 TestCertExpiration 294.19
49 TestForceSystemdFlag 47.64
50 TestForceSystemdEnv 94.81
52 TestKVMDriverInstallOrUpdate 4.16
56 TestErrorSpam/setup 42.45
57 TestErrorSpam/start 0.36
58 TestErrorSpam/status 0.75
59 TestErrorSpam/pause 1.7
60 TestErrorSpam/unpause 1.7
61 TestErrorSpam/stop 5.69
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 85.91
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 44.66
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.42
73 TestFunctional/serial/CacheCmd/cache/add_local 1.95
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 42.5
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.52
84 TestFunctional/serial/LogsFileCmd 1.52
85 TestFunctional/serial/InvalidService 4.69
87 TestFunctional/parallel/ConfigCmd 0.39
88 TestFunctional/parallel/DashboardCmd 28.45
89 TestFunctional/parallel/DryRun 0.3
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 1.15
95 TestFunctional/parallel/ServiceCmdConnect 9.67
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 46.39
99 TestFunctional/parallel/SSHCmd 0.45
100 TestFunctional/parallel/CpCmd 1.45
101 TestFunctional/parallel/MySQL 25.61
102 TestFunctional/parallel/FileSync 0.23
103 TestFunctional/parallel/CertSync 1.46
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
111 TestFunctional/parallel/License 0.19
112 TestFunctional/parallel/Version/short 0.08
113 TestFunctional/parallel/Version/components 0.97
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.84
119 TestFunctional/parallel/ImageCommands/Setup 1.54
120 TestFunctional/parallel/ServiceCmd/DeployApp 10.24
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.65
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.69
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.84
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.8
137 TestFunctional/parallel/ServiceCmd/List 0.56
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
140 TestFunctional/parallel/ServiceCmd/Format 0.4
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
142 TestFunctional/parallel/ServiceCmd/URL 0.36
143 TestFunctional/parallel/ProfileCmd/profile_list 0.51
144 TestFunctional/parallel/MountCmd/any-port 21.12
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
149 TestFunctional/parallel/MountCmd/specific-port 1.88
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 201.01
158 TestMultiControlPlane/serial/DeployApp 6.36
159 TestMultiControlPlane/serial/PingHostFromPods 1.22
160 TestMultiControlPlane/serial/AddWorkerNode 59.36
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
163 TestMultiControlPlane/serial/CopyFile 13.29
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.81
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
172 TestMultiControlPlane/serial/RestartCluster 373
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
174 TestMultiControlPlane/serial/AddSecondaryNode 81.91
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
179 TestJSONOutput/start/Command 55.04
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.74
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.62
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.38
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.21
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 90.36
211 TestMountStart/serial/StartWithMountFirst 28.16
212 TestMountStart/serial/VerifyMountFirst 0.39
213 TestMountStart/serial/StartWithMountSecond 28.62
214 TestMountStart/serial/VerifyMountSecond 0.38
215 TestMountStart/serial/DeleteFirst 0.7
216 TestMountStart/serial/VerifyMountPostDelete 0.39
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 23.62
219 TestMountStart/serial/VerifyMountPostStop 0.38
222 TestMultiNode/serial/FreshStart2Nodes 112.43
223 TestMultiNode/serial/DeployApp2Nodes 5.49
224 TestMultiNode/serial/PingHostFrom2Pods 0.82
225 TestMultiNode/serial/AddNode 51.69
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.59
228 TestMultiNode/serial/CopyFile 7.3
229 TestMultiNode/serial/StopNode 2.29
230 TestMultiNode/serial/StartAfterStop 38.88
232 TestMultiNode/serial/DeleteNode 2.27
234 TestMultiNode/serial/RestartMultiNode 182.32
235 TestMultiNode/serial/ValidateNameConflict 44.78
242 TestScheduledStopUnix 115.81
246 TestRunningBinaryUpgrade 164.44
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
252 TestNoKubernetes/serial/StartWithK8s 116.82
253 TestStoppedBinaryUpgrade/Setup 0.52
254 TestStoppedBinaryUpgrade/Upgrade 107.44
256 TestPause/serial/Start 109.21
257 TestNoKubernetes/serial/StartWithStopK8s 35.75
258 TestNoKubernetes/serial/Start 41.89
259 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
264 TestNoKubernetes/serial/ProfileList 1.37
265 TestNoKubernetes/serial/Stop 1.37
270 TestNetworkPlugins/group/false 3.78
271 TestNoKubernetes/serial/StartNoArgs 22.84
275 TestPause/serial/SecondStartNoReconfiguration 48.54
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
284 TestPause/serial/Pause 0.91
285 TestPause/serial/VerifyStatus 0.28
286 TestPause/serial/Unpause 0.8
287 TestPause/serial/PauseAgain 0.91
288 TestPause/serial/DeletePaused 0.89
289 TestPause/serial/VerifyDeletedResources 0.54
290 TestNetworkPlugins/group/auto/Start 81.26
291 TestNetworkPlugins/group/kindnet/Start 113.19
292 TestNetworkPlugins/group/calico/Start 118.17
293 TestNetworkPlugins/group/auto/KubeletFlags 0.21
294 TestNetworkPlugins/group/auto/NetCatPod 10.25
295 TestNetworkPlugins/group/auto/DNS 0.19
296 TestNetworkPlugins/group/auto/Localhost 0.14
297 TestNetworkPlugins/group/auto/HairPin 0.16
298 TestNetworkPlugins/group/custom-flannel/Start 81.06
299 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
300 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
301 TestNetworkPlugins/group/calico/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
303 TestNetworkPlugins/group/calico/KubeletFlags 0.22
304 TestNetworkPlugins/group/calico/NetCatPod 11.23
305 TestNetworkPlugins/group/kindnet/DNS 0.16
306 TestNetworkPlugins/group/kindnet/Localhost 0.12
307 TestNetworkPlugins/group/kindnet/HairPin 0.12
308 TestNetworkPlugins/group/calico/DNS 0.25
309 TestNetworkPlugins/group/calico/Localhost 0.17
310 TestNetworkPlugins/group/calico/HairPin 0.19
311 TestNetworkPlugins/group/enable-default-cni/Start 58.74
312 TestNetworkPlugins/group/flannel/Start 99.63
313 TestNetworkPlugins/group/bridge/Start 107.6
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
316 TestNetworkPlugins/group/custom-flannel/DNS 0.16
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
319 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
320 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.34
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
327 TestStartStop/group/no-preload/serial/FirstStart 95.06
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
330 TestNetworkPlugins/group/flannel/NetCatPod 13.26
331 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
332 TestNetworkPlugins/group/bridge/NetCatPod 12.27
333 TestNetworkPlugins/group/flannel/DNS 0.14
334 TestNetworkPlugins/group/flannel/Localhost 0.13
335 TestNetworkPlugins/group/flannel/HairPin 0.12
336 TestNetworkPlugins/group/bridge/DNS 0.2
337 TestNetworkPlugins/group/bridge/Localhost 0.14
338 TestNetworkPlugins/group/bridge/HairPin 0.13
340 TestStartStop/group/embed-certs/serial/FirstStart 87.71
342 TestStartStop/group/newest-cni/serial/FirstStart 66.78
343 TestStartStop/group/no-preload/serial/DeployApp 8.31
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
348 TestStartStop/group/newest-cni/serial/Stop 10.55
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
350 TestStartStop/group/newest-cni/serial/SecondStart 37.57
351 TestStartStop/group/embed-certs/serial/DeployApp 10.29
352 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
357 TestStartStop/group/newest-cni/serial/Pause 2.87
359 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 92.05
361 TestStartStop/group/no-preload/serial/SecondStart 651.9
364 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
368 TestStartStop/group/embed-certs/serial/SecondStart 569.28
369 TestStartStop/group/old-k8s-version/serial/Stop 2.29
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 469.72
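
The rows above list each passing test as order, name, and duration in seconds. If an aggregate is useful, a one-liner over a saved copy of this listing is enough (a sketch; passed_tests.txt is a hypothetical file holding exactly these rows):

    awk '{sum += $NF} END {printf "%.2fs total across %d tests\n", sum, NR}' passed_tests.txt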
x
+
TestDownloadOnly/v1.20.0/json-events (7.83s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-058787 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-058787 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.826707086s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.83s)
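
For reference, the --download-only run above streams one CloudEvents-style JSON object per line on stdout. A quick way to inspect just the step names is to filter by event type (a sketch: the type string and the .data.name field are assumptions based on minikube's JSON output format, and download-only-demo is an illustrative profile name, not one used by the test):

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo --driver=kvm2 --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'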

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1010 17:57:40.183199   88876 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1010 17:57:40.183303   88876 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-058787
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-058787: exit status 85 (65.937885ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-058787 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC |          |
	|         | -p download-only-058787        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 17:57:32
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 17:57:32.399836   88888 out.go:345] Setting OutFile to fd 1 ...
	I1010 17:57:32.400099   88888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 17:57:32.400109   88888 out.go:358] Setting ErrFile to fd 2...
	I1010 17:57:32.400113   88888 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 17:57:32.400319   88888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	W1010 17:57:32.400439   88888 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19787-81676/.minikube/config/config.json: open /home/jenkins/minikube-integration/19787-81676/.minikube/config/config.json: no such file or directory
	I1010 17:57:32.401024   88888 out.go:352] Setting JSON to true
	I1010 17:57:32.401880   88888 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5998,"bootTime":1728577054,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 17:57:32.401995   88888 start.go:139] virtualization: kvm guest
	I1010 17:57:32.404456   88888 out.go:97] [download-only-058787] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1010 17:57:32.404606   88888 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball: no such file or directory
	I1010 17:57:32.404696   88888 notify.go:220] Checking for updates...
	I1010 17:57:32.405965   88888 out.go:169] MINIKUBE_LOCATION=19787
	I1010 17:57:32.407469   88888 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 17:57:32.409087   88888 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 17:57:32.410679   88888 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 17:57:32.412266   88888 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1010 17:57:32.415062   88888 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1010 17:57:32.415272   88888 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 17:57:32.450680   88888 out.go:97] Using the kvm2 driver based on user configuration
	I1010 17:57:32.450717   88888 start.go:297] selected driver: kvm2
	I1010 17:57:32.450726   88888 start.go:901] validating driver "kvm2" against <nil>
	I1010 17:57:32.451109   88888 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 17:57:32.451227   88888 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 17:57:32.467507   88888 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 17:57:32.467576   88888 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 17:57:32.468142   88888 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1010 17:57:32.468294   88888 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 17:57:32.468328   88888 cni.go:84] Creating CNI manager for ""
	I1010 17:57:32.468377   88888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 17:57:32.468386   88888 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 17:57:32.468463   88888 start.go:340] cluster config:
	{Name:download-only-058787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-058787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:57:32.468641   88888 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 17:57:32.470805   88888 out.go:97] Downloading VM boot image ...
	I1010 17:57:32.470852   88888 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19787-81676/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1010 17:57:34.866274   88888 out.go:97] Starting "download-only-058787" primary control-plane node in "download-only-058787" cluster
	I1010 17:57:34.866305   88888 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 17:57:34.889037   88888 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1010 17:57:34.889088   88888 cache.go:56] Caching tarball of preloaded images
	I1010 17:57:34.889272   88888 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1010 17:57:34.891356   88888 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1010 17:57:34.891387   88888 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1010 17:57:34.916494   88888 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-058787 host does not exist
	  To start a cluster, run: "minikube start -p download-only-058787"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
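
The download step logged above advertises the preload's md5 in the checksum query parameter, so the cached tarball can be re-verified by hand. A minimal sketch using the checksum and cache path shown in the log (adjust the minikube home for other environments):

    TARBALL=/home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    echo "f93b07cde9c3289306cbaeb7a1803c19  $TARBALL" | md5sum -c -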

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-058787
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (6.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-497455 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-497455 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.234067176s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1010 17:57:46.761980   88876 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1010 17:57:46.762022   88876 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-497455
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-497455: exit status 85 (65.475988ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-058787 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC |                     |
	|         | -p download-only-058787        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:57 UTC |
	| delete  | -p download-only-058787        | download-only-058787 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC | 10 Oct 24 17:57 UTC |
	| start   | -o=json --download-only        | download-only-497455 | jenkins | v1.34.0 | 10 Oct 24 17:57 UTC |                     |
	|         | -p download-only-497455        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/10 17:57:40
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 17:57:40.570601   89090 out.go:345] Setting OutFile to fd 1 ...
	I1010 17:57:40.570723   89090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 17:57:40.570732   89090 out.go:358] Setting ErrFile to fd 2...
	I1010 17:57:40.570736   89090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 17:57:40.570919   89090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 17:57:40.571520   89090 out.go:352] Setting JSON to true
	I1010 17:57:40.572349   89090 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6007,"bootTime":1728577054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 17:57:40.572457   89090 start.go:139] virtualization: kvm guest
	I1010 17:57:40.574626   89090 out.go:97] [download-only-497455] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 17:57:40.574791   89090 notify.go:220] Checking for updates...
	I1010 17:57:40.576178   89090 out.go:169] MINIKUBE_LOCATION=19787
	I1010 17:57:40.577684   89090 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 17:57:40.579170   89090 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 17:57:40.580964   89090 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 17:57:40.582436   89090 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1010 17:57:40.585364   89090 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1010 17:57:40.585609   89090 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 17:57:40.618699   89090 out.go:97] Using the kvm2 driver based on user configuration
	I1010 17:57:40.618725   89090 start.go:297] selected driver: kvm2
	I1010 17:57:40.618731   89090 start.go:901] validating driver "kvm2" against <nil>
	I1010 17:57:40.619041   89090 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 17:57:40.619134   89090 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19787-81676/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1010 17:57:40.635252   89090 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1010 17:57:40.635301   89090 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1010 17:57:40.635847   89090 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1010 17:57:40.636009   89090 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 17:57:40.636044   89090 cni.go:84] Creating CNI manager for ""
	I1010 17:57:40.636096   89090 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1010 17:57:40.636109   89090 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1010 17:57:40.636171   89090 start.go:340] cluster config:
	{Name:download-only-497455 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-497455 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:57:40.636276   89090 iso.go:125] acquiring lock: {Name:mk53f4624ffe67cac4f5d6b29b9ed87292a010a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 17:57:40.638101   89090 out.go:97] Starting "download-only-497455" primary control-plane node in "download-only-497455" cluster
	I1010 17:57:40.638123   89090 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 17:57:40.660737   89090 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 17:57:40.660776   89090 cache.go:56] Caching tarball of preloaded images
	I1010 17:57:40.660970   89090 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1010 17:57:40.662843   89090 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1010 17:57:40.662864   89090 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1010 17:57:40.706681   89090 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1010 17:57:45.321443   89090 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1010 17:57:45.321569   89090 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19787-81676/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-497455 host does not exist
	  To start a cluster, run: "minikube start -p download-only-497455"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-497455
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I1010 17:57:47.365811   88876 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-244092 --alsologtostderr --binary-mirror http://127.0.0.1:46773 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-244092" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-244092
--- PASS: TestBinaryMirror (0.61s)
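
The test points --binary-mirror at a local HTTP endpoint so kubectl, kubelet, and kubeadm are fetched from it instead of dl.k8s.io. A minimal sketch of standing up such a mirror is below; the directory layout the mirror must serve is an assumption (mirroring dl.k8s.io's release-style paths), and binary-mirror-demo is an illustrative profile name:

    mkdir -p mirror/v1.31.1/bin/linux/amd64
    cp kubectl kubelet kubeadm mirror/v1.31.1/bin/linux/amd64/
    (cd mirror && python3 -m http.server 46773) &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:46773 --driver=kvm2 --container-runtime=crio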

                                                
                                    
x
+
TestOffline (83.98s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-683204 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-683204 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.053813336s)
helpers_test.go:175: Cleaning up "offline-crio-683204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-683204
--- PASS: TestOffline (83.98s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-473910
addons_test.go:935: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-473910: exit status 85 (55.495481ms)

                                                
                                                
-- stdout --
	* Profile "addons-473910" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-473910"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-473910
addons_test.go:946: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-473910: exit status 85 (56.987418ms)

                                                
                                                
-- stdout --
	* Profile "addons-473910" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-473910"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (130.82s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-473910 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-473910 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m10.816943299s)
--- PASS: TestAddons/Setup (130.82s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-473910 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-473910 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/PullSecret (9.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-473910 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-473910 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ee51e877-54f4-4ca0-84f2-3fa775f67d92] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ee51e877-54f4-4ca0-84f2-3fa775f67d92] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 9.004889045s
addons_test.go:633: (dbg) Run:  kubectl --context addons-473910 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-473910 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-473910 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (9.53s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.52s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.238668ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-4k74q" [604b3b36-a2fa-4e21-ab57-959fbdee9a2b] Running
I1010 18:00:21.179264   88876 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1010 18:00:21.179280   88876 kapi.go:107] duration metric: took 6.512704ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002863067s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-f4hnz" [5d8faf25-5998-4727-be43-6800e479cc59] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.086014216s
addons_test.go:331: (dbg) Run:  kubectl --context addons-473910 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-473910 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-473910 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.619028018s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 ip
2024/10/10 18:00:37 [DEBUG] GET http://192.168.39.238:5000
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.52s)
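
The registry check above has two halves: an in-cluster reachability probe via the registry Service DNS name, and a host-side request against the node IP on port 5000. Roughly the same checks by hand (a sketch; registry-check is an arbitrary pod name, and the /v2/_catalog path is the standard registry v2 API rather than something the test itself calls):

    kubectl --context addons-473910 run registry-check --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -s http://192.168.39.238:5000/v2/_catalog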

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.92s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9ttjs" [afa6a1bb-da9f-4d60-8ffe-79ec0f2b866b] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.073976366s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-473910 addons disable inspektor-gadget --alsologtostderr -v=1: (5.847437594s)
--- PASS: TestAddons/parallel/InspektorGadget (10.92s)
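
The 8m0s pod wait performed by the helper is roughly equivalent to a kubectl wait on the same label selector (a sketch; it assumes the gadget pod already exists when the command runs):

    kubectl --context addons-473910 -n gadget wait pod -l k8s-app=gadget --for=condition=Ready --timeout=8m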

                                                
                                    
x
+
TestAddons/parallel/CSI (77.46s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.519965ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-473910 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-473910 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [de2f3d37-03f9-444d-864b-575e1b3d3355] Pending
helpers_test.go:344: "task-pv-pod" [de2f3d37-03f9-444d-864b-575e1b3d3355] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [de2f3d37-03f9-444d-864b-575e1b3d3355] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004286959s
addons_test.go:511: (dbg) Run:  kubectl --context addons-473910 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-473910 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-473910 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-473910 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-473910 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-473910 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-473910 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [853e6edc-683a-4f32-89c0-e6aecbd48902] Pending
helpers_test.go:344: "task-pv-pod-restore" [853e6edc-683a-4f32-89c0-e6aecbd48902] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [853e6edc-683a-4f32-89c0-e6aecbd48902] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004244389s
addons_test.go:553: (dbg) Run:  kubectl --context addons-473910 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-473910 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-473910 delete volumesnapshot new-snapshot-demo
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-473910 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.926175903s)
--- PASS: TestAddons/parallel/CSI (77.46s)
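
The long runs of jsonpath queries above are the helper re-reading the PVC phase until it reaches the expected value or the 6m0s budget runs out. A standalone loop that does the same thing for the hpvc claim (a sketch; the helper's exact accepted phases, poll interval, and timeout handling may differ):

    until [ "$(kubectl --context addons-473910 -n default get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done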

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-473910 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-hlmxd" [c102612e-1e74-4371-a685-fd6e81194b45] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-hlmxd" [c102612e-1e74-4371-a685-fd6e81194b45] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-hlmxd" [c102612e-1e74-4371-a685-fd6e81194b45] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004266719s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable headlamp --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-473910 addons disable headlamp --alsologtostderr -v=1: (5.687635092s)
--- PASS: TestAddons/parallel/Headlamp (17.66s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-kgkxx" [5cfe6418-5799-4105-8960-ac0bedaff76f] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00507295s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.86s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (59.41s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-473910 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-473910 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-473910 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c478f71c-80fd-4870-aa40-ac96af4c9d1e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c478f71c-80fd-4870-aa40-ac96af4c9d1e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c478f71c-80fd-4870-aa40-ac96af4c9d1e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.004637017s
addons_test.go:902: (dbg) Run:  kubectl --context addons-473910 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 ssh "cat /opt/local-path-provisioner/pvc-4c005375-b770-4d67-a3b3-31e1e4368658_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-473910 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-473910 delete pvc test-pvc
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-473910 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.60068374s)
--- PASS: TestAddons/parallel/LocalPath (59.41s)
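
The LocalPath flow above (claim a PVC, run a pod that writes to it, read the file back from the node) can be reproduced with the same kubectl/minikube calls; the contents of the pvc.yaml/pod.yaml fixtures are not shown in this log, and the pvc-<uid> directory name is specific to this run:

    kubectl --context addons-473910 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-473910 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # Poll until the claim is Bound and the pod has completed.
    kubectl --context addons-473910 get pvc test-pvc -o jsonpath={.status.phase}
    # Read the file the pod wrote into the provisioned host path.
    minikube -p addons-473910 ssh "cat /opt/local-path-provisioner/pvc-4c005375-b770-4d67-a3b3-31e1e4368658_default_test-pvc/file1"
    kubectl --context addons-473910 delete pod test-local-path
    kubectl --context addons-473910 delete pvc test-pvc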

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6cgkn" [a63e13d1-dda1-4177-8dda-1a4d528ccd30] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00457231s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-v8npc" [15bbbac6-0d9b-4fc1-b7e9-629ca145bc7c] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003675955s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable yakd --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-473910 addons disable yakd --alsologtostderr -v=1: (6.021774018s)
--- PASS: TestAddons/parallel/Yakd (12.03s)

                                                
                                    
x
+
TestCertOptions (64.1s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-584539 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-584539 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m2.788850743s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-584539 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-584539 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-584539 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-584539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-584539
--- PASS: TestCertOptions (64.10s)
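
The test feeds extra SANs and a non-default apiserver port into the start command; a sketch of how to confirm they ended up in the serving certificate and kubeconfig (the grep patterns are illustrative, not the test's own assertions):

    minikube start -p cert-options-584539 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # The extra IPs/names should appear in the certificate's Subject Alternative Name list.
    minikube -p cert-options-584539 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # The admin kubeconfig inside the guest should point at port 8555.
    minikube ssh -p cert-options-584539 -- "sudo cat /etc/kubernetes/admin.conf" | grep server: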

                                                
                                    
x
+
TestCertExpiration (294.19s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-292195 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1010 19:07:50.017980   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-292195 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m11.011186708s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-292195 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-292195 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (41.821752878s)
helpers_test.go:175: Cleaning up "cert-expiration-292195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-292195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-292195: (1.352649809s)
--- PASS: TestCertExpiration (294.19s)
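
A sketch of the two-phase flow exercised here: the first start issues certificates valid for only 3 minutes, and the second start, run after they have lapsed, has to regenerate them with a one-year (8760h) lifetime. The explicit sleep is an assumption standing in for the wait the test performs:

    minikube start -p cert-expiration-292195 --memory=2048 --cert-expiration=3m \
      --driver=kvm2 --container-runtime=crio
    sleep 180   # let the short-lived certificates expire
    minikube start -p cert-expiration-292195 --memory=2048 --cert-expiration=8760h \
      --driver=kvm2 --container-runtime=crio
    minikube delete -p cert-expiration-292195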

                                                
                                    
x
+
TestForceSystemdFlag (47.64s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-160659 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-160659 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.408080728s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-160659 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-160659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-160659
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-160659: (1.016280818s)
--- PASS: TestForceSystemdFlag (47.64s)
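
The test starts with --force-systemd and then reads the CRI-O drop-in; a hedged way to check the effect directly is to look for the cgroup manager setting (the cgroup_manager key is standard CRI-O configuration and is assumed here, not quoted in this log):

    minikube start -p force-systemd-flag-160659 --memory=2048 --force-systemd \
      --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
    # With --force-systemd the drop-in is expected to select the systemd cgroup manager.
    minikube -p force-systemd-flag-160659 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" \
      | grep cgroup_manager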

                                                
                                    
x
+
TestForceSystemdEnv (94.81s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-754699 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-754699 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m34.035314276s)
helpers_test.go:175: Cleaning up "force-systemd-env-754699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-754699
--- PASS: TestForceSystemdEnv (94.81s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.16s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1010 19:07:17.980748   88876 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1010 19:07:17.980931   88876 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1010 19:07:18.012376   88876 install.go:62] docker-machine-driver-kvm2: exit status 1
W1010 19:07:18.012911   88876 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1010 19:07:18.012985   88876 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2161973343/001/docker-machine-driver-kvm2
I1010 19:07:18.233674   88876 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2161973343/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52facc0 0x52facc0 0x52facc0 0x52facc0 0x52facc0 0x52facc0 0x52facc0] Decompressors:map[bz2:0xc0003e8940 gz:0xc0003e8948 tar:0xc0003e88d0 tar.bz2:0xc0003e88f0 tar.gz:0xc0003e8910 tar.xz:0xc0003e8920 tar.zst:0xc0003e8930 tbz2:0xc0003e88f0 tgz:0xc0003e8910 txz:0xc0003e8920 tzst:0xc0003e8930 xz:0xc0003e8950 zip:0xc0003e8960 zst:0xc0003e8958] Getters:map[file:0xc001dc5830 http:0xc00070ec30 https:0xc00070ec80] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1010 19:07:18.233721   88876 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2161973343/001/docker-machine-driver-kvm2
I1010 19:07:20.480909   88876 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1010 19:07:20.481034   88876 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1010 19:07:20.517087   88876 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1010 19:07:20.517128   88876 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1010 19:07:20.517208   88876 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1010 19:07:20.517249   88876 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2161973343/002/docker-machine-driver-kvm2
I1010 19:07:20.566098   88876 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2161973343/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52facc0 0x52facc0 0x52facc0 0x52facc0 0x52facc0 0x52facc0 0x52facc0] Decompressors:map[bz2:0xc0003e8940 gz:0xc0003e8948 tar:0xc0003e88d0 tar.bz2:0xc0003e88f0 tar.gz:0xc0003e8910 tar.xz:0xc0003e8920 tar.zst:0xc0003e8930 tbz2:0xc0003e88f0 tgz:0xc0003e8910 txz:0xc0003e8920 tzst:0xc0003e8930 xz:0xc0003e8950 zip:0xc0003e8960 zst:0xc0003e8958] Getters:map[file:0xc000977a70 http:0xc0006bfea0 https:0xc0006bfef0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1010 19:07:20.566164   88876 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2161973343/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.16s)
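
The warnings above show the download fallback: the arch-suffixed artifact is tried first (its checksum fetch 404s here), then the unsuffixed common name. An equivalent manual sketch with curl, using the URLs from the log:

    base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    # Try the amd64-specific binary first, fall back to the common artifact name.
    curl -fLO "$base/docker-machine-driver-kvm2-amd64" \
      || curl -fLO "$base/docker-machine-driver-kvm2"
    chmod +x docker-machine-driver-kvm2*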

                                                
                                    
x
+
TestErrorSpam/setup (42.45s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-347168 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-347168 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-347168 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-347168 --driver=kvm2  --container-runtime=crio: (42.449278322s)
--- PASS: TestErrorSpam/setup (42.45s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
x
+
TestErrorSpam/pause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 pause
--- PASS: TestErrorSpam/pause (1.70s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
x
+
TestErrorSpam/stop (5.69s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 stop: (2.307667119s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 stop: (1.452504351s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-347168 --log_dir /tmp/nospam-347168 stop: (1.925517672s)
--- PASS: TestErrorSpam/stop (5.69s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19787-81676/.minikube/files/etc/test/nested/copy/88876/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (85.91s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-346405 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1010 18:09:59.536640   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:09:59.543056   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:09:59.554487   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:09:59.575905   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:09:59.617349   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:09:59.698857   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:09:59.860457   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:10:00.182034   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:10:00.824118   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:10:02.105797   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:10:04.668892   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:10:09.791087   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:10:20.032615   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:10:40.514066   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-346405 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m25.907665149s)
--- PASS: TestFunctional/serial/StartWithProxy (85.91s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (44.66s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1010 18:11:06.023001   88876 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-346405 --alsologtostderr -v=8
E1010 18:11:21.476471   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-346405 --alsologtostderr -v=8: (44.657487548s)
functional_test.go:663: soft start took 44.658155989s for "functional-346405" cluster.
I1010 18:11:50.680875   88876 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (44.66s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-346405 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-346405 cache add registry.k8s.io/pause:3.1: (1.056751677s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-346405 cache add registry.k8s.io/pause:3.3: (1.255584318s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-346405 cache add registry.k8s.io/pause:latest: (1.108681551s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-346405 /tmp/TestFunctionalserialCacheCmdcacheadd_local195436912/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 cache add minikube-local-cache-test:functional-346405
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-346405 cache add minikube-local-cache-test:functional-346405: (1.612102517s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 cache delete minikube-local-cache-test:functional-346405
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-346405
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-346405 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.917812ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-346405 cache reload: (1.00734291s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)
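
The reload cycle in this test, as plain commands: delete a cached image from inside the node, confirm crictl no longer finds it (the exit status 1 above), then let `cache reload` push it back from the host-side cache:

    minikube -p functional-346405 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-346405 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    minikube -p functional-346405 cache reload
    minikube -p functional-346405 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again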

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 kubectl -- --context functional-346405 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-346405 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (42.5s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-346405 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-346405 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.503409004s)
functional_test.go:761: restart took 42.503532071s for "functional-346405" cluster.
I1010 18:12:41.050399   88876 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (42.50s)
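
The restart passes a component flag through --extra-config; one hedged way to confirm the flag reached the apiserver static pod (the pod name is assumed to follow the usual kube-apiserver-<node> pattern):

    minikube start -p functional-346405 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-346405 -n kube-system get pod kube-apiserver-functional-346405 \
      -o yaml | grep enable-admission-plugins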

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-346405 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-346405 logs: (1.5183736s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 logs --file /tmp/TestFunctionalserialLogsFileCmd218463268/001/logs.txt
E1010 18:12:43.398470   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-346405 logs --file /tmp/TestFunctionalserialLogsFileCmd218463268/001/logs.txt: (1.517448554s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.69s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-346405 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-346405
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-346405: exit status 115 (288.769693ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.170:30091 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-346405 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-346405 delete -f testdata/invalidsvc.yaml: (1.200148551s)
--- PASS: TestFunctional/serial/InvalidService (4.69s)
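
The expected failure mode here is `minikube service` refusing to open a URL for a Service with no running pods (exit status 115, SVC_UNREACHABLE). A sketch using the same fixture:

    kubectl --context functional-346405 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-346405; echo "exit=$?"   # exit=115
    kubectl --context functional-346405 delete -f testdata/invalidsvc.yaml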

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-346405 config get cpus: exit status 14 (58.225096ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-346405 config get cpus: exit status 14 (70.143256ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
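
The exit codes above come from the config subcommand itself: `config get` on an unset key exits 14, while set/unset succeed silently. A sketch of the same round trip:

    minikube -p functional-346405 config get cpus; echo "exit=$?"   # exit=14 while the key is unset
    minikube -p functional-346405 config set cpus 2
    minikube -p functional-346405 config get cpus                   # prints 2
    minikube -p functional-346405 config unset cpus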

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (28.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-346405 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-346405 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 98360: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (28.45s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-346405 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-346405 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.38609ms)

                                                
                                                
-- stdout --
	* [functional-346405] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19787
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 18:13:03.066180   98133 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:13:03.066475   98133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:03.066487   98133 out.go:358] Setting ErrFile to fd 2...
	I1010 18:13:03.066491   98133 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:03.066667   98133 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:13:03.067220   98133 out.go:352] Setting JSON to false
	I1010 18:13:03.068298   98133 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6929,"bootTime":1728577054,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:13:03.068388   98133 start.go:139] virtualization: kvm guest
	I1010 18:13:03.070697   98133 out.go:177] * [functional-346405] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 18:13:03.072089   98133 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 18:13:03.072125   98133 notify.go:220] Checking for updates...
	I1010 18:13:03.074806   98133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:13:03.076142   98133 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:13:03.077542   98133 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:03.078957   98133 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:13:03.080419   98133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:13:03.082511   98133 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:13:03.083163   98133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:13:03.083251   98133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:13:03.098932   98133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	I1010 18:13:03.099422   98133 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:13:03.099994   98133 main.go:141] libmachine: Using API Version  1
	I1010 18:13:03.100009   98133 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:13:03.100396   98133 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:13:03.100601   98133 main.go:141] libmachine: (functional-346405) Calling .DriverName
	I1010 18:13:03.100883   98133 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 18:13:03.101354   98133 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:13:03.101450   98133 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:13:03.116417   98133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45547
	I1010 18:13:03.117099   98133 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:13:03.117678   98133 main.go:141] libmachine: Using API Version  1
	I1010 18:13:03.117703   98133 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:13:03.118041   98133 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:13:03.118300   98133 main.go:141] libmachine: (functional-346405) Calling .DriverName
	I1010 18:13:03.150892   98133 out.go:177] * Using the kvm2 driver based on existing profile
	I1010 18:13:03.152336   98133 start.go:297] selected driver: kvm2
	I1010 18:13:03.152352   98133 start.go:901] validating driver "kvm2" against &{Name:functional-346405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-346405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:13:03.152464   98133 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:13:03.154848   98133 out.go:201] 
	W1010 18:13:03.156130   98133 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1010 18:13:03.157429   98133 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-346405 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
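
--dry-run validates the request without touching the existing VM; asking for 250MB trips the RSRC_INSUFFICIENT_REQ_MEMORY check and exits 23, exactly as in the stderr above, while the unconstrained dry run passes validation:

    minikube start -p functional-346405 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio; echo "exit=$?"   # exit=23
    minikube start -p functional-346405 --dry-run --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio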

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-346405 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-346405 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.094036ms)

                                                
                                                
-- stdout --
	* [functional-346405] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19787
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 18:13:03.359552   98230 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:13:03.359647   98230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:03.359656   98230 out.go:358] Setting ErrFile to fd 2...
	I1010 18:13:03.359661   98230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:13:03.359935   98230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:13:03.360454   98230 out.go:352] Setting JSON to false
	I1010 18:13:03.361461   98230 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6929,"bootTime":1728577054,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:13:03.361570   98230 start.go:139] virtualization: kvm guest
	I1010 18:13:03.363663   98230 out.go:177] * [functional-346405] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1010 18:13:03.365172   98230 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 18:13:03.365211   98230 notify.go:220] Checking for updates...
	I1010 18:13:03.367717   98230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:13:03.369217   98230 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 18:13:03.370871   98230 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 18:13:03.372440   98230 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:13:03.375065   98230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:13:03.377053   98230 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:13:03.377509   98230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:13:03.377559   98230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:13:03.393682   98230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
	I1010 18:13:03.394262   98230 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:13:03.394890   98230 main.go:141] libmachine: Using API Version  1
	I1010 18:13:03.394919   98230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:13:03.395241   98230 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:13:03.395432   98230 main.go:141] libmachine: (functional-346405) Calling .DriverName
	I1010 18:13:03.395703   98230 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 18:13:03.396000   98230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:13:03.396036   98230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:13:03.411300   98230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I1010 18:13:03.411951   98230 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:13:03.412502   98230 main.go:141] libmachine: Using API Version  1
	I1010 18:13:03.412528   98230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:13:03.412822   98230 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:13:03.413002   98230 main.go:141] libmachine: (functional-346405) Calling .DriverName
	I1010 18:13:03.446877   98230 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1010 18:13:03.448501   98230 start.go:297] selected driver: kvm2
	I1010 18:13:03.448527   98230 start.go:901] validating driver "kvm2" against &{Name:functional-346405 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-346405 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:13:03.448661   98230 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:13:03.450954   98230 out.go:201] 
	W1010 18:13:03.452256   98230 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1010 18:13:03.453544   98230 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
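(For reference, the localized output above translates to: "* Using the kvm2 driver based on the existing profile" and "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB". The test appears to start minikube under a French locale with a deliberately undersized memory request and only checks that the localized error message is produced, so the non-zero exit is expected.)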

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
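As a side note, the status fields exercised by the custom format above (.Host, .Kubelet, .APIServer, .Kubeconfig) also appear in the "status -o json" output, so a caller can consume them programmatically. A minimal Go sketch, not how functional_test.go does it; the binary path and profile name are simply the ones used in this run:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Field names mirror the Go template used above: {{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}.
type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-346405",
		"status", "-o", "json").Output()
	// minikube status exits non-zero when a component is down but still prints the JSON,
	// so only give up if there is nothing to decode.
	if len(out) == 0 && err != nil {
		log.Fatalf("minikube status: %v", err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decoding status JSON: %v", err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}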

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-346405 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-346405 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-fbh99" [3b5ddacc-707f-4301-a93b-6e225b87a42e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-fbh99" [3b5ddacc-707f-4301-a93b-6e225b87a42e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.014376113s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.170:30776
functional_test.go:1675: http://192.168.39.170:30776: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-fbh99

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.170:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.170:30776
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.67s)
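The check behind this test is essentially: expose the deployment on a NodePort, ask minikube for the service URL, and confirm the echoserver answers through it. A rough Go equivalent, reusing the URL printed above (in practice the URL comes from "minikube service hello-node-connect --url", and the hostname prefix checked here is an assumption based on the response body shown):

package main

import (
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// URL taken from the test output above; a real caller would query it dynamically.
	resp, err := http.Get("http://192.168.39.170:30776")
	if err != nil {
		log.Fatalf("GET failed: %v", err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("reading body: %v", err)
	}
	// The echoserver reports the serving pod's name, which starts with the deployment name.
	if !strings.Contains(string(body), "Hostname: hello-node-connect") {
		log.Fatalf("unexpected body:\n%s", body)
	}
	log.Println("service reachable through the NodePort")
}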

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [189fe1ac-dc92-4bdc-9884-865db90cf534] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004382558s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-346405 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-346405 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-346405 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-346405 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7b3a0771-087a-42c6-a639-7391b0c5a53e] Pending
helpers_test.go:344: "sp-pod" [7b3a0771-087a-42c6-a639-7391b0c5a53e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7b3a0771-087a-42c6-a639-7391b0c5a53e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004862215s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-346405 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-346405 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-346405 delete -f testdata/storage-provisioner/pod.yaml: (4.610546181s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-346405 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8f203975-a831-4f1c-be9f-3eadf5bb47cd] Pending
helpers_test.go:344: "sp-pod" [8f203975-a831-4f1c-be9f-3eadf5bb47cd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8f203975-a831-4f1c-be9f-3eadf5bb47cd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004173865s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-346405 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.39s)
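The persistence check this test performs can be reproduced by hand with kubectl: write a marker file onto the PVC-backed mount, delete and recreate the pod, then confirm the file is still there. A condensed sketch follows; pod and manifest names are copied from the log above, and the real test additionally waits for the recreated pod to become Ready before the final exec:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func kubectl(args ...string) string {
	full := append([]string{"--context", "functional-346405"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ...wait here for the new sp-pod to be Running/Ready...
	if !strings.Contains(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"), "foo") {
		log.Fatal("marker file did not survive pod recreation")
	}
	log.Println("PVC contents survived pod recreation")
}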

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh -n functional-346405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 cp functional-346405:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3522896703/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh -n functional-346405 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh -n functional-346405 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-346405 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-vbj6g" [bc19254e-e087-4808-87e1-bf1b00ffca17] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-vbj6g" [bc19254e-e087-4808-87e1-bf1b00ffca17] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.003777864s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-346405 exec mysql-6cdb49bbb-vbj6g -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-346405 exec mysql-6cdb49bbb-vbj6g -- mysql -ppassword -e "show databases;": exit status 1 (178.872893ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1010 18:13:24.183933   88876 retry.go:31] will retry after 1.018972171s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-346405 exec mysql-6cdb49bbb-vbj6g -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.61s)
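The first "show databases;" attempt fails because mysqld is not yet accepting socket connections even though the pod is Running; the harness's retry helper (retry.go) absorbs that, as seen in the "will retry after 1.018972171s" line. A simplified backoff loop in the same spirit, with the pod name copied from the log and the attempt count and delays chosen arbitrarily:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-346405", "exec", "mysql-6cdb49bbb-vbj6g", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	delay := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			log.Printf("succeeded on attempt %d:\n%s", attempt, out)
			return
		}
		log.Printf("attempt %d failed (%v), retrying in %s", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
	log.Fatal("mysql never became reachable")
}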

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/88876/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "sudo cat /etc/test/nested/copy/88876/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/88876.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "sudo cat /etc/ssl/certs/88876.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/88876.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "sudo cat /usr/share/ca-certificates/88876.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/888762.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "sudo cat /etc/ssl/certs/888762.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/888762.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "sudo cat /usr/share/ca-certificates/888762.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.46s)
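(The .pem paths checked here are the host certificates synced into the VM, and the hashed names 51391683.0 and 3ec20f2e.0 appear to be the OpenSSL subject-hash filenames under /etc/ssl/certs for those same certificates, so the test covers both the copied files and the rehashed trust-store entries.)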

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-346405 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-346405 ssh "sudo systemctl is-active docker": exit status 1 (255.428797ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-346405 ssh "sudo systemctl is-active containerd": exit status 1 (227.406555ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
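Since this profile runs crio, the other runtimes' units are expected to be inactive: systemctl is-active exits non-zero for an inactive unit (status 3 in the stderr above), which minikube ssh surfaces as the exit status 1 seen here. A small sketch of the same probe, using the binary path and profile name from this run:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-346405",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// An inactive unit makes systemctl (and therefore minikube ssh) exit non-zero.
		if err == nil || !strings.Contains(state, "inactive") {
			log.Fatalf("%s: expected an inactive runtime, got %q (err=%v)", unit, state, err)
		}
		log.Printf("%s is inactive, as expected on a crio cluster", unit)
	}
}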

                                                
                                    
x
+
TestFunctional/parallel/License (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-346405 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-346405
localhost/kicbase/echo-server:functional-346405
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-346405 image ls --format short --alsologtostderr:
I1010 18:13:26.517751   98920 out.go:345] Setting OutFile to fd 1 ...
I1010 18:13:26.517916   98920 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 18:13:26.517929   98920 out.go:358] Setting ErrFile to fd 2...
I1010 18:13:26.517935   98920 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 18:13:26.518255   98920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
I1010 18:13:26.519138   98920 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1010 18:13:26.519253   98920 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1010 18:13:26.519599   98920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1010 18:13:26.519638   98920 main.go:141] libmachine: Launching plugin server for driver kvm2
I1010 18:13:26.535832   98920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46773
I1010 18:13:26.536401   98920 main.go:141] libmachine: () Calling .GetVersion
I1010 18:13:26.537071   98920 main.go:141] libmachine: Using API Version  1
I1010 18:13:26.537094   98920 main.go:141] libmachine: () Calling .SetConfigRaw
I1010 18:13:26.537497   98920 main.go:141] libmachine: () Calling .GetMachineName
I1010 18:13:26.537698   98920 main.go:141] libmachine: (functional-346405) Calling .GetState
I1010 18:13:26.539660   98920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1010 18:13:26.539734   98920 main.go:141] libmachine: Launching plugin server for driver kvm2
I1010 18:13:26.555859   98920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
I1010 18:13:26.556285   98920 main.go:141] libmachine: () Calling .GetVersion
I1010 18:13:26.556788   98920 main.go:141] libmachine: Using API Version  1
I1010 18:13:26.556804   98920 main.go:141] libmachine: () Calling .SetConfigRaw
I1010 18:13:26.557124   98920 main.go:141] libmachine: () Calling .GetMachineName
I1010 18:13:26.557347   98920 main.go:141] libmachine: (functional-346405) Calling .DriverName
I1010 18:13:26.557605   98920 ssh_runner.go:195] Run: systemctl --version
I1010 18:13:26.557650   98920 main.go:141] libmachine: (functional-346405) Calling .GetSSHHostname
I1010 18:13:26.561429   98920 main.go:141] libmachine: (functional-346405) DBG | domain functional-346405 has defined MAC address 52:54:00:09:ce:cb in network mk-functional-346405
I1010 18:13:26.562035   98920 main.go:141] libmachine: (functional-346405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ce:cb", ip: ""} in network mk-functional-346405: {Iface:virbr1 ExpiryTime:2024-10-10 19:09:55 +0000 UTC Type:0 Mac:52:54:00:09:ce:cb Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-346405 Clientid:01:52:54:00:09:ce:cb}
I1010 18:13:26.562099   98920 main.go:141] libmachine: (functional-346405) DBG | domain functional-346405 has defined IP address 192.168.39.170 and MAC address 52:54:00:09:ce:cb in network mk-functional-346405
I1010 18:13:26.562167   98920 main.go:141] libmachine: (functional-346405) Calling .GetSSHPort
I1010 18:13:26.562386   98920 main.go:141] libmachine: (functional-346405) Calling .GetSSHKeyPath
I1010 18:13:26.562609   98920 main.go:141] libmachine: (functional-346405) Calling .GetSSHUsername
I1010 18:13:26.562770   98920 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/functional-346405/id_rsa Username:docker}
I1010 18:13:26.652481   98920 ssh_runner.go:195] Run: sudo crictl images --output json
I1010 18:13:26.740836   98920 main.go:141] libmachine: Making call to close driver server
I1010 18:13:26.740876   98920 main.go:141] libmachine: (functional-346405) Calling .Close
I1010 18:13:26.741145   98920 main.go:141] libmachine: (functional-346405) DBG | Closing plugin on server side
I1010 18:13:26.741195   98920 main.go:141] libmachine: Successfully made call to close driver server
I1010 18:13:26.741203   98920 main.go:141] libmachine: Making call to close connection to plugin binary
I1010 18:13:26.741238   98920 main.go:141] libmachine: Making call to close driver server
I1010 18:13:26.741249   98920 main.go:141] libmachine: (functional-346405) Calling .Close
I1010 18:13:26.741474   98920 main.go:141] libmachine: Successfully made call to close driver server
I1010 18:13:26.741493   98920 main.go:141] libmachine: Making call to close connection to plugin binary
I1010 18:13:26.741499   98920 main.go:141] libmachine: (functional-346405) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-346405 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/kicbase/echo-server           | functional-346405  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                 | latest             | 7f553e8bbc897 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-346405  | dcacfed49c2ba | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-346405 image ls --format table --alsologtostderr:
I1010 18:13:27.285512   99086 out.go:345] Setting OutFile to fd 1 ...
I1010 18:13:27.285642   99086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 18:13:27.285653   99086 out.go:358] Setting ErrFile to fd 2...
I1010 18:13:27.285659   99086 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 18:13:27.285850   99086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
I1010 18:13:27.286486   99086 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1010 18:13:27.286617   99086 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1010 18:13:27.287214   99086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1010 18:13:27.287270   99086 main.go:141] libmachine: Launching plugin server for driver kvm2
I1010 18:13:27.302650   99086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35969
I1010 18:13:27.303191   99086 main.go:141] libmachine: () Calling .GetVersion
I1010 18:13:27.303853   99086 main.go:141] libmachine: Using API Version  1
I1010 18:13:27.303887   99086 main.go:141] libmachine: () Calling .SetConfigRaw
I1010 18:13:27.304204   99086 main.go:141] libmachine: () Calling .GetMachineName
I1010 18:13:27.304449   99086 main.go:141] libmachine: (functional-346405) Calling .GetState
I1010 18:13:27.306466   99086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1010 18:13:27.306517   99086 main.go:141] libmachine: Launching plugin server for driver kvm2
I1010 18:13:27.321607   99086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44263
I1010 18:13:27.322076   99086 main.go:141] libmachine: () Calling .GetVersion
I1010 18:13:27.322790   99086 main.go:141] libmachine: Using API Version  1
I1010 18:13:27.322853   99086 main.go:141] libmachine: () Calling .SetConfigRaw
I1010 18:13:27.323254   99086 main.go:141] libmachine: () Calling .GetMachineName
I1010 18:13:27.323456   99086 main.go:141] libmachine: (functional-346405) Calling .DriverName
I1010 18:13:27.323646   99086 ssh_runner.go:195] Run: systemctl --version
I1010 18:13:27.323675   99086 main.go:141] libmachine: (functional-346405) Calling .GetSSHHostname
I1010 18:13:27.326812   99086 main.go:141] libmachine: (functional-346405) DBG | domain functional-346405 has defined MAC address 52:54:00:09:ce:cb in network mk-functional-346405
I1010 18:13:27.327189   99086 main.go:141] libmachine: (functional-346405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ce:cb", ip: ""} in network mk-functional-346405: {Iface:virbr1 ExpiryTime:2024-10-10 19:09:55 +0000 UTC Type:0 Mac:52:54:00:09:ce:cb Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-346405 Clientid:01:52:54:00:09:ce:cb}
I1010 18:13:27.327231   99086 main.go:141] libmachine: (functional-346405) DBG | domain functional-346405 has defined IP address 192.168.39.170 and MAC address 52:54:00:09:ce:cb in network mk-functional-346405
I1010 18:13:27.327311   99086 main.go:141] libmachine: (functional-346405) Calling .GetSSHPort
I1010 18:13:27.327471   99086 main.go:141] libmachine: (functional-346405) Calling .GetSSHKeyPath
I1010 18:13:27.327628   99086 main.go:141] libmachine: (functional-346405) Calling .GetSSHUsername
I1010 18:13:27.327756   99086 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/functional-346405/id_rsa Username:docker}
I1010 18:13:27.415874   99086 ssh_runner.go:195] Run: sudo crictl images --output json
I1010 18:13:27.460088   99086 main.go:141] libmachine: Making call to close driver server
I1010 18:13:27.460108   99086 main.go:141] libmachine: (functional-346405) Calling .Close
I1010 18:13:27.460426   99086 main.go:141] libmachine: Successfully made call to close driver server
I1010 18:13:27.460443   99086 main.go:141] libmachine: Making call to close connection to plugin binary
I1010 18:13:27.460459   99086 main.go:141] libmachine: Making call to close driver server
I1010 18:13:27.460468   99086 main.go:141] libmachine: (functional-346405) Calling .Close
I1010 18:13:27.460479   99086 main.go:141] libmachine: (functional-346405) DBG | Closing plugin on server side
I1010 18:13:27.460688   99086 main.go:141] libmachine: Successfully made call to close driver server
I1010 18:13:27.460705   99086 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-346405 image ls --format json --alsologtostderr:
[{"id":"dcacfed49c2ba5e959cd08d60ddc67f3489114077b8e51f75555985b8d0d1f36","repoDigests":["localhost/minikube-local-cache-test@sha256:b0df14f7b87532285d38cea99542d15933944461e4514ab63bf1bd65bf7248ab"],"repoTags":["localhost/minikube-local-cache-test:functional-346405"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"siz
e":"68420934"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a
7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisi
oner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2
bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags"
:["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0","repoDigests":["docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818028"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-346405"],"size":"4943877"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pa
use@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-346405 image ls --format json --alsologtostderr:
I1010 18:13:27.046058   99038 out.go:345] Setting OutFile to fd 1 ...
I1010 18:13:27.046386   99038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 18:13:27.046405   99038 out.go:358] Setting ErrFile to fd 2...
I1010 18:13:27.046413   99038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 18:13:27.046886   99038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
I1010 18:13:27.047603   99038 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1010 18:13:27.047751   99038 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1010 18:13:27.048273   99038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1010 18:13:27.048331   99038 main.go:141] libmachine: Launching plugin server for driver kvm2
I1010 18:13:27.063572   99038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43809
I1010 18:13:27.064051   99038 main.go:141] libmachine: () Calling .GetVersion
I1010 18:13:27.064639   99038 main.go:141] libmachine: Using API Version  1
I1010 18:13:27.064666   99038 main.go:141] libmachine: () Calling .SetConfigRaw
I1010 18:13:27.065076   99038 main.go:141] libmachine: () Calling .GetMachineName
I1010 18:13:27.065300   99038 main.go:141] libmachine: (functional-346405) Calling .GetState
I1010 18:13:27.067233   99038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1010 18:13:27.067290   99038 main.go:141] libmachine: Launching plugin server for driver kvm2
I1010 18:13:27.083453   99038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40077
I1010 18:13:27.083919   99038 main.go:141] libmachine: () Calling .GetVersion
I1010 18:13:27.084470   99038 main.go:141] libmachine: Using API Version  1
I1010 18:13:27.084505   99038 main.go:141] libmachine: () Calling .SetConfigRaw
I1010 18:13:27.084827   99038 main.go:141] libmachine: () Calling .GetMachineName
I1010 18:13:27.085015   99038 main.go:141] libmachine: (functional-346405) Calling .DriverName
I1010 18:13:27.085342   99038 ssh_runner.go:195] Run: systemctl --version
I1010 18:13:27.085370   99038 main.go:141] libmachine: (functional-346405) Calling .GetSSHHostname
I1010 18:13:27.088440   99038 main.go:141] libmachine: (functional-346405) DBG | domain functional-346405 has defined MAC address 52:54:00:09:ce:cb in network mk-functional-346405
I1010 18:13:27.088838   99038 main.go:141] libmachine: (functional-346405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ce:cb", ip: ""} in network mk-functional-346405: {Iface:virbr1 ExpiryTime:2024-10-10 19:09:55 +0000 UTC Type:0 Mac:52:54:00:09:ce:cb Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-346405 Clientid:01:52:54:00:09:ce:cb}
I1010 18:13:27.088882   99038 main.go:141] libmachine: (functional-346405) DBG | domain functional-346405 has defined IP address 192.168.39.170 and MAC address 52:54:00:09:ce:cb in network mk-functional-346405
I1010 18:13:27.089005   99038 main.go:141] libmachine: (functional-346405) Calling .GetSSHPort
I1010 18:13:27.089201   99038 main.go:141] libmachine: (functional-346405) Calling .GetSSHKeyPath
I1010 18:13:27.089346   99038 main.go:141] libmachine: (functional-346405) Calling .GetSSHUsername
I1010 18:13:27.089478   99038 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/functional-346405/id_rsa Username:docker}
I1010 18:13:27.176693   99038 ssh_runner.go:195] Run: sudo crictl images --output json
I1010 18:13:27.230504   99038 main.go:141] libmachine: Making call to close driver server
I1010 18:13:27.230527   99038 main.go:141] libmachine: (functional-346405) Calling .Close
I1010 18:13:27.230863   99038 main.go:141] libmachine: Successfully made call to close driver server
I1010 18:13:27.230888   99038 main.go:141] libmachine: Making call to close connection to plugin binary
I1010 18:13:27.230896   99038 main.go:141] libmachine: Making call to close driver server
I1010 18:13:27.230903   99038 main.go:141] libmachine: (functional-346405) Calling .Close
I1010 18:13:27.230901   99038 main.go:141] libmachine: (functional-346405) DBG | Closing plugin on server side
I1010 18:13:27.231140   99038 main.go:141] libmachine: (functional-346405) DBG | Closing plugin on server side
I1010 18:13:27.231177   99038 main.go:141] libmachine: Successfully made call to close driver server
I1010 18:13:27.231201   99038 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
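All of the image ls format variants above are rendered from the same "sudo crictl images --output json" call visible in their stderr logs. The JSON printed by the command itself is straightforward to consume; a small sketch with the struct fields taken from the output above (id, repoDigests, repoTags, size):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Field names follow the JSON printed by "image ls --format json" above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-346405",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decoding image list: %v", err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}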

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-346405 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0
repoDigests:
- docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "195818028"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: dcacfed49c2ba5e959cd08d60ddc67f3489114077b8e51f75555985b8d0d1f36
repoDigests:
- localhost/minikube-local-cache-test@sha256:b0df14f7b87532285d38cea99542d15933944461e4514ab63bf1bd65bf7248ab
repoTags:
- localhost/minikube-local-cache-test:functional-346405
size: "3330"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-346405
size: "4943877"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-346405 image ls --format yaml --alsologtostderr:
I1010 18:13:26.794657   98974 out.go:345] Setting OutFile to fd 1 ...
I1010 18:13:26.794960   98974 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 18:13:26.794972   98974 out.go:358] Setting ErrFile to fd 2...
I1010 18:13:26.794989   98974 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 18:13:26.795231   98974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
I1010 18:13:26.795880   98974 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1010 18:13:26.796008   98974 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1010 18:13:26.796446   98974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1010 18:13:26.796501   98974 main.go:141] libmachine: Launching plugin server for driver kvm2
I1010 18:13:26.812267   98974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35213
I1010 18:13:26.812772   98974 main.go:141] libmachine: () Calling .GetVersion
I1010 18:13:26.813444   98974 main.go:141] libmachine: Using API Version  1
I1010 18:13:26.813481   98974 main.go:141] libmachine: () Calling .SetConfigRaw
I1010 18:13:26.813814   98974 main.go:141] libmachine: () Calling .GetMachineName
I1010 18:13:26.814011   98974 main.go:141] libmachine: (functional-346405) Calling .GetState
I1010 18:13:26.816208   98974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1010 18:13:26.816273   98974 main.go:141] libmachine: Launching plugin server for driver kvm2
I1010 18:13:26.832004   98974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34587
I1010 18:13:26.832583   98974 main.go:141] libmachine: () Calling .GetVersion
I1010 18:13:26.833174   98974 main.go:141] libmachine: Using API Version  1
I1010 18:13:26.833201   98974 main.go:141] libmachine: () Calling .SetConfigRaw
I1010 18:13:26.833567   98974 main.go:141] libmachine: () Calling .GetMachineName
I1010 18:13:26.833778   98974 main.go:141] libmachine: (functional-346405) Calling .DriverName
I1010 18:13:26.834014   98974 ssh_runner.go:195] Run: systemctl --version
I1010 18:13:26.834043   98974 main.go:141] libmachine: (functional-346405) Calling .GetSSHHostname
I1010 18:13:26.837798   98974 main.go:141] libmachine: (functional-346405) DBG | domain functional-346405 has defined MAC address 52:54:00:09:ce:cb in network mk-functional-346405
I1010 18:13:26.838257   98974 main.go:141] libmachine: (functional-346405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ce:cb", ip: ""} in network mk-functional-346405: {Iface:virbr1 ExpiryTime:2024-10-10 19:09:55 +0000 UTC Type:0 Mac:52:54:00:09:ce:cb Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-346405 Clientid:01:52:54:00:09:ce:cb}
I1010 18:13:26.838361   98974 main.go:141] libmachine: (functional-346405) DBG | domain functional-346405 has defined IP address 192.168.39.170 and MAC address 52:54:00:09:ce:cb in network mk-functional-346405
I1010 18:13:26.838517   98974 main.go:141] libmachine: (functional-346405) Calling .GetSSHPort
I1010 18:13:26.838683   98974 main.go:141] libmachine: (functional-346405) Calling .GetSSHKeyPath
I1010 18:13:26.838825   98974 main.go:141] libmachine: (functional-346405) Calling .GetSSHUsername
I1010 18:13:26.839009   98974 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/functional-346405/id_rsa Username:docker}
I1010 18:13:26.938212   98974 ssh_runner.go:195] Run: sudo crictl images --output json
I1010 18:13:26.993109   98974 main.go:141] libmachine: Making call to close driver server
I1010 18:13:26.993126   98974 main.go:141] libmachine: (functional-346405) Calling .Close
I1010 18:13:26.993341   98974 main.go:141] libmachine: Successfully made call to close driver server
I1010 18:13:26.993401   98974 main.go:141] libmachine: (functional-346405) DBG | Closing plugin on server side
I1010 18:13:26.993421   98974 main.go:141] libmachine: Making call to close connection to plugin binary
I1010 18:13:26.993430   98974 main.go:141] libmachine: Making call to close driver server
I1010 18:13:26.993437   98974 main.go:141] libmachine: (functional-346405) Calling .Close
I1010 18:13:26.993744   98974 main.go:141] libmachine: Successfully made call to close driver server
I1010 18:13:26.993767   98974 main.go:141] libmachine: Making call to close connection to plugin binary
I1010 18:13:26.993743   98974 main.go:141] libmachine: (functional-346405) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
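For reference, the image inventory above can be reproduced by hand against the same profile. A minimal sketch of what the test wraps (profile name taken from this run; any profile works the same way), plus roughly what the runner executes over SSH, as the log shows:

  # list images known to the CRI-O runtime in the functional-346405 profile
  out/minikube-linux-amd64 -p functional-346405 image ls --format yaml
  # roughly equivalent view from inside the node
  out/minikube-linux-amd64 -p functional-346405 ssh "sudo crictl images --output json"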

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-346405 ssh pgrep buildkitd: exit status 1 (212.31544ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image build -t localhost/my-image:functional-346405 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-346405 image build -t localhost/my-image:functional-346405 testdata/build --alsologtostderr: (3.383101723s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-346405 image build -t localhost/my-image:functional-346405 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 829fc9b905e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-346405
--> 2e0a4ae4181
Successfully tagged localhost/my-image:functional-346405
2e0a4ae41810961ff30c4cd80c2b7cb278bd308568cbd93a96e6fcfd77b4571f
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-346405 image build -t localhost/my-image:functional-346405 testdata/build --alsologtostderr:
I1010 18:13:27.144283   99062 out.go:345] Setting OutFile to fd 1 ...
I1010 18:13:27.144417   99062 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 18:13:27.144426   99062 out.go:358] Setting ErrFile to fd 2...
I1010 18:13:27.144431   99062 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1010 18:13:27.144647   99062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
I1010 18:13:27.145537   99062 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1010 18:13:27.146191   99062 config.go:182] Loaded profile config "functional-346405": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1010 18:13:27.146569   99062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1010 18:13:27.146608   99062 main.go:141] libmachine: Launching plugin server for driver kvm2
I1010 18:13:27.161886   99062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44235
I1010 18:13:27.162408   99062 main.go:141] libmachine: () Calling .GetVersion
I1010 18:13:27.163007   99062 main.go:141] libmachine: Using API Version  1
I1010 18:13:27.163029   99062 main.go:141] libmachine: () Calling .SetConfigRaw
I1010 18:13:27.163477   99062 main.go:141] libmachine: () Calling .GetMachineName
I1010 18:13:27.163680   99062 main.go:141] libmachine: (functional-346405) Calling .GetState
I1010 18:13:27.165578   99062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1010 18:13:27.165625   99062 main.go:141] libmachine: Launching plugin server for driver kvm2
I1010 18:13:27.181109   99062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41527
I1010 18:13:27.181618   99062 main.go:141] libmachine: () Calling .GetVersion
I1010 18:13:27.182251   99062 main.go:141] libmachine: Using API Version  1
I1010 18:13:27.182281   99062 main.go:141] libmachine: () Calling .SetConfigRaw
I1010 18:13:27.182739   99062 main.go:141] libmachine: () Calling .GetMachineName
I1010 18:13:27.183029   99062 main.go:141] libmachine: (functional-346405) Calling .DriverName
I1010 18:13:27.183282   99062 ssh_runner.go:195] Run: systemctl --version
I1010 18:13:27.183309   99062 main.go:141] libmachine: (functional-346405) Calling .GetSSHHostname
I1010 18:13:27.186728   99062 main.go:141] libmachine: (functional-346405) DBG | domain functional-346405 has defined MAC address 52:54:00:09:ce:cb in network mk-functional-346405
I1010 18:13:27.187213   99062 main.go:141] libmachine: (functional-346405) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:09:ce:cb", ip: ""} in network mk-functional-346405: {Iface:virbr1 ExpiryTime:2024-10-10 19:09:55 +0000 UTC Type:0 Mac:52:54:00:09:ce:cb Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:functional-346405 Clientid:01:52:54:00:09:ce:cb}
I1010 18:13:27.187244   99062 main.go:141] libmachine: (functional-346405) DBG | domain functional-346405 has defined IP address 192.168.39.170 and MAC address 52:54:00:09:ce:cb in network mk-functional-346405
I1010 18:13:27.187447   99062 main.go:141] libmachine: (functional-346405) Calling .GetSSHPort
I1010 18:13:27.187639   99062 main.go:141] libmachine: (functional-346405) Calling .GetSSHKeyPath
I1010 18:13:27.187812   99062 main.go:141] libmachine: (functional-346405) Calling .GetSSHUsername
I1010 18:13:27.187953   99062 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/functional-346405/id_rsa Username:docker}
I1010 18:13:27.273523   99062 build_images.go:161] Building image from path: /tmp/build.3343505397.tar
I1010 18:13:27.273621   99062 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1010 18:13:27.286889   99062 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3343505397.tar
I1010 18:13:27.291467   99062 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3343505397.tar: stat -c "%s %y" /var/lib/minikube/build/build.3343505397.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3343505397.tar': No such file or directory
I1010 18:13:27.291494   99062 ssh_runner.go:362] scp /tmp/build.3343505397.tar --> /var/lib/minikube/build/build.3343505397.tar (3072 bytes)
I1010 18:13:27.323064   99062 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3343505397
I1010 18:13:27.335007   99062 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3343505397 -xf /var/lib/minikube/build/build.3343505397.tar
I1010 18:13:27.346273   99062 crio.go:315] Building image: /var/lib/minikube/build/build.3343505397
I1010 18:13:27.346357   99062 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-346405 /var/lib/minikube/build/build.3343505397 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1010 18:13:30.443558   99062 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-346405 /var/lib/minikube/build/build.3343505397 --cgroup-manager=cgroupfs: (3.097173423s)
I1010 18:13:30.443637   99062 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3343505397
I1010 18:13:30.454346   99062 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3343505397.tar
I1010 18:13:30.474735   99062 build_images.go:217] Built localhost/my-image:functional-346405 from /tmp/build.3343505397.tar
I1010 18:13:30.474770   99062 build_images.go:133] succeeded building to: functional-346405
I1010 18:13:30.474775   99062 build_images.go:134] failed building to: 
I1010 18:13:30.474804   99062 main.go:141] libmachine: Making call to close driver server
I1010 18:13:30.474816   99062 main.go:141] libmachine: (functional-346405) Calling .Close
I1010 18:13:30.475121   99062 main.go:141] libmachine: Successfully made call to close driver server
I1010 18:13:30.475142   99062 main.go:141] libmachine: Making call to close connection to plugin binary
I1010 18:13:30.475183   99062 main.go:141] libmachine: Making call to close driver server
I1010 18:13:30.475197   99062 main.go:141] libmachine: (functional-346405) Calling .Close
I1010 18:13:30.475453   99062 main.go:141] libmachine: Successfully made call to close driver server
I1010 18:13:30.475465   99062 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image ls
2024/10/10 18:13:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)
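The three STEP lines above imply a build context of roughly the following shape. This is a hedged reconstruction, not the actual contents of testdata/build; only the base image, the RUN step, and the ADD of content.txt are visible in the log:

  # Containerfile (reconstructed sketch)
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /

With such a context in place, the build itself is the same command the test ran; CRI-O profiles delegate it to podman on the node, as the log shows:

  out/minikube-linux-amd64 -p functional-346405 image build -t localhost/my-image:functional-346405 testdata/build --alsologtostderr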

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.513202104s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-346405
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-346405 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-346405 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-gnnt7" [3ccb3372-4e5c-40d0-bdd4-89d2a881695b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-gnnt7" [3ccb3372-4e5c-40d0-bdd4-89d2a881695b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.031890098s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.24s)
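The deployment exercised here is plain kubectl. A minimal sketch of the same steps against the functional-346405 context, with the image and port taken from the log (the readiness check is a hypothetical convenience; the test polls pods matching app=hello-node itself):

  kubectl --context functional-346405 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-346405 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-346405 get pods -l app=hello-node   # hypothetical readiness check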

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image load --daemon kicbase/echo-server:functional-346405 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-346405 image load --daemon kicbase/echo-server:functional-346405 --alsologtostderr: (3.325270722s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image load --daemon kicbase/echo-server:functional-346405 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-346405
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image load --daemon kicbase/echo-server:functional-346405 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image save kicbase/echo-server:functional-346405 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image rm kicbase/echo-server:functional-346405 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-346405
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 image save --daemon kicbase/echo-server:functional-346405 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-346405
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.80s)
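Taken together, the image subcommands exercised in this group form a load/save round trip. A hedged sketch of the same flow, using the tags and tarball path from the logs (the tar path is Jenkins-specific; substitute any writable location):

  # cache a local docker image into the cluster runtime
  out/minikube-linux-amd64 -p functional-346405 image load --daemon kicbase/echo-server:functional-346405
  # export it from the cluster to a tarball, remove it, then restore it from the tarball
  out/minikube-linux-amd64 -p functional-346405 image save kicbase/echo-server:functional-346405 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-346405 image rm kicbase/echo-server:functional-346405
  out/minikube-linux-amd64 -p functional-346405 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
  # push the image back into the local docker daemon
  out/minikube-linux-amd64 -p functional-346405 image save --daemon kicbase/echo-server:functional-346405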

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 service list -o json
functional_test.go:1494: Took "565.472739ms" to run "out/minikube-linux-amd64 -p functional-346405 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.170:31249
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.170:31249
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
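The service lookups above map to a small set of `minikube service` invocations. A sketch, with the NodePort endpoint (31249) specific to this particular run:

  out/minikube-linux-amd64 -p functional-346405 service list                                            # human-readable listing
  out/minikube-linux-amd64 -p functional-346405 service list -o json                                    # same data as JSON
  out/minikube-linux-amd64 -p functional-346405 service --namespace=default --https --url hello-node    # -> https://192.168.39.170:31249 in this run
  out/minikube-linux-amd64 -p functional-346405 service hello-node --url                                # plain HTTP URL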

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "461.164272ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.576226ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (21.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-346405 /tmp/TestFunctionalparallelMountCmdany-port2739996777/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728583982357100853" to /tmp/TestFunctionalparallelMountCmdany-port2739996777/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728583982357100853" to /tmp/TestFunctionalparallelMountCmdany-port2739996777/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728583982357100853" to /tmp/TestFunctionalparallelMountCmdany-port2739996777/001/test-1728583982357100853
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (279.897252ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1010 18:13:02.637362   88876 retry.go:31] will retry after 353.603877ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 10 18:13 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 10 18:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 10 18:13 test-1728583982357100853
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh cat /mount-9p/test-1728583982357100853
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-346405 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [52f8c90e-fba0-4f55-95b5-876b3c7ceaa4] Pending
helpers_test.go:344: "busybox-mount" [52f8c90e-fba0-4f55-95b5-876b3c7ceaa4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [52f8c90e-fba0-4f55-95b5-876b3c7ceaa4] Running
helpers_test.go:344: "busybox-mount" [52f8c90e-fba0-4f55-95b5-876b3c7ceaa4] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [52f8c90e-fba0-4f55-95b5-876b3c7ceaa4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.024188707s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-346405 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-346405 /tmp/TestFunctionalparallelMountCmdany-port2739996777/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (21.12s)
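The 9p mount being tested is the ordinary `minikube mount` flow. A minimal sketch, assuming an arbitrary host directory (the /tmp path in the log is created by the test harness):

  # expose a host directory inside the guest at /mount-9p (runs in the foreground)
  out/minikube-linux-amd64 mount -p functional-346405 /tmp/some-host-dir:/mount-9p --alsologtostderr -v=1
  # from another shell, confirm the 9p mount and inspect its contents
  out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-346405 ssh -- ls -la /mount-9p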

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "330.532682ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "53.462882ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-346405 /tmp/TestFunctionalparallelMountCmdspecific-port176847682/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (245.032502ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1010 18:13:23.718444   88876 retry.go:31] will retry after 493.329834ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-346405 /tmp/TestFunctionalparallelMountCmdspecific-port176847682/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-346405 ssh "sudo umount -f /mount-9p": exit status 1 (243.512181ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-346405 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-346405 /tmp/TestFunctionalparallelMountCmdspecific-port176847682/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.88s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-346405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977223646/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-346405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977223646/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-346405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977223646/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T" /mount1: exit status 1 (294.471957ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1010 18:13:25.648734   88876 retry.go:31] will retry after 386.85307ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-346405 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-346405 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-346405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977223646/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-346405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977223646/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-346405 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2977223646/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)
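Two mount variants from the blocks above, sketched with a placeholder host path: pinning the 9p server to a fixed port, and tearing down every mount process for a profile in one shot.

  out/minikube-linux-amd64 mount -p functional-346405 /tmp/some-host-dir:/mount-9p --port 46464   # fixed port instead of a random one
  out/minikube-linux-amd64 mount -p functional-346405 --kill=true                                 # kill all mount processes for the profile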

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-346405
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-346405
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-346405
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (201.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-142481 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1010 18:14:59.534262   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:15:27.240790   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-142481 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m20.332474558s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (201.01s)
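For reference, the three-control-plane cluster is created with a single start invocation; this is the command from the log (--ha provisions multiple control-plane nodes, and the KVM driver and CRI-O runtime match this job's configuration):

  out/minikube-linux-amd64 start -p ha-142481 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr   # confirm all nodes report as expected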

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-142481 -- rollout status deployment/busybox: (4.071777372s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-5544l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-wf7qs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-xnwpj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-5544l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-wf7qs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-xnwpj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-5544l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-wf7qs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-xnwpj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-5544l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-5544l -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-wf7qs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-wf7qs -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-xnwpj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-xnwpj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)
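The two checks above boil down to in-pod DNS resolution and host reachability probes. A sketch against one of the busybox pods, with the pod name taken from this run (it changes per deployment) and 192.168.39.1 being the KVM host-side gateway in this job:

  out/minikube-linux-amd64 kubectl -p ha-142481 -- rollout status deployment/busybox
  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-5544l -- nslookup kubernetes.default.svc.cluster.local
  out/minikube-linux-amd64 kubectl -p ha-142481 -- exec busybox-7dff88458-5544l -- sh -c "ping -c 1 192.168.39.1"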

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (59.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-142481 -v=7 --alsologtostderr
E1010 18:17:50.018672   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:17:50.025144   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:17:50.036668   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:17:50.058092   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:17:50.099639   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:17:50.181127   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:17:50.342557   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:17:50.664351   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:17:51.306236   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:17:52.588031   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:17:55.150096   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:18:00.271886   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-142481 -v=7 --alsologtostderr: (58.504847419s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.36s)
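Adding a worker is a single node subcommand; the invocation from the log, followed by the status check the test uses to verify the new node:

  out/minikube-linux-amd64 node add -p ha-142481 -v=7 --alsologtostderr
  out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr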

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-142481 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp testdata/cp-test.txt ha-142481:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481:/home/docker/cp-test.txt ha-142481-m02:/home/docker/cp-test_ha-142481_ha-142481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m02 "sudo cat /home/docker/cp-test_ha-142481_ha-142481-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481:/home/docker/cp-test.txt ha-142481-m03:/home/docker/cp-test_ha-142481_ha-142481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m03 "sudo cat /home/docker/cp-test_ha-142481_ha-142481-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481:/home/docker/cp-test.txt ha-142481-m04:/home/docker/cp-test_ha-142481_ha-142481-m04.txt
E1010 18:18:10.514221   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m04 "sudo cat /home/docker/cp-test_ha-142481_ha-142481-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp testdata/cp-test.txt ha-142481-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m02:/home/docker/cp-test.txt ha-142481:/home/docker/cp-test_ha-142481-m02_ha-142481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481 "sudo cat /home/docker/cp-test_ha-142481-m02_ha-142481.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m02:/home/docker/cp-test.txt ha-142481-m03:/home/docker/cp-test_ha-142481-m02_ha-142481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m03 "sudo cat /home/docker/cp-test_ha-142481-m02_ha-142481-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m02:/home/docker/cp-test.txt ha-142481-m04:/home/docker/cp-test_ha-142481-m02_ha-142481-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m04 "sudo cat /home/docker/cp-test_ha-142481-m02_ha-142481-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp testdata/cp-test.txt ha-142481-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt ha-142481:/home/docker/cp-test_ha-142481-m03_ha-142481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481 "sudo cat /home/docker/cp-test_ha-142481-m03_ha-142481.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt ha-142481-m02:/home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m02 "sudo cat /home/docker/cp-test_ha-142481-m03_ha-142481-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m03:/home/docker/cp-test.txt ha-142481-m04:/home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m04 "sudo cat /home/docker/cp-test_ha-142481-m03_ha-142481-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp testdata/cp-test.txt ha-142481-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4115718102/001/cp-test_ha-142481-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt ha-142481:/home/docker/cp-test_ha-142481-m04_ha-142481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481 "sudo cat /home/docker/cp-test_ha-142481-m04_ha-142481.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt ha-142481-m02:/home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m02 "sudo cat /home/docker/cp-test_ha-142481-m04_ha-142481-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 cp ha-142481-m04:/home/docker/cp-test.txt ha-142481-m03:/home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 ssh -n ha-142481-m03 "sudo cat /home/docker/cp-test_ha-142481-m04_ha-142481-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.29s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.81s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 node delete m03 -v=7 --alsologtostderr
E1010 18:27:50.017951   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-142481 node delete m03 -v=7 --alsologtostderr: (16.06298s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.81s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/RestartCluster (373s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-142481 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1010 18:32:50.018069   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:34:13.088181   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:34:59.530191   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-142481 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m12.223072405s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (373.00s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

TestMultiControlPlane/serial/AddSecondaryNode (81.91s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-142481 --control-plane -v=7 --alsologtostderr
E1010 18:37:50.018650   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-142481 --control-plane -v=7 --alsologtostderr: (1m21.032798437s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-142481 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.91s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestJSONOutput/start/Command (55.04s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-004687 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-004687 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.038171176s)
--- PASS: TestJSONOutput/start/Command (55.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-004687 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-004687 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.38s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-004687 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-004687 --output=json --user=testUser: (7.377828719s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-237894 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-237894 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.765604ms)

-- stdout --
	{"specversion":"1.0","id":"2edaead9-fef6-48ea-827d-5f1cb58bd71d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-237894] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3bf34dac-fa31-4d41-af6a-f2ad63921404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19787"}}
	{"specversion":"1.0","id":"3e04eaa8-b54e-47f9-b1c9-7f527181771a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2039e9f3-537f-4a98-af5f-43f7bf3d57d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig"}}
	{"specversion":"1.0","id":"f7c8de97-2409-432b-a0de-9beaf2ff3023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube"}}
	{"specversion":"1.0","id":"514e8504-1603-4af3-8936-5c37ddeb1c7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"441726e3-343b-46ff-ac7d-af5ba15ce61b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f24e3365-27d9-40d5-9584-599c82dd2408","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-237894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-237894
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (90.36s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-104676 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-104676 --driver=kvm2  --container-runtime=crio: (44.634870975s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-115414 --driver=kvm2  --container-runtime=crio
E1010 18:39:59.533820   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-115414 --driver=kvm2  --container-runtime=crio: (42.59467456s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-104676
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-115414
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-115414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-115414
helpers_test.go:175: Cleaning up "first-104676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-104676
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-104676: (1.043973338s)
--- PASS: TestMinikubeProfile (90.36s)

TestMountStart/serial/StartWithMountFirst (28.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-136567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-136567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.156328676s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.16s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-136567 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-136567 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (28.62s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-150692 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-150692 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.618896899s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.62s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-150692 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-150692 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-136567 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-150692 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-150692 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-150692
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-150692: (1.281460058s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (23.62s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-150692
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-150692: (22.620407683s)
--- PASS: TestMountStart/serial/RestartStopped (23.62s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-150692 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-150692 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (112.43s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-965291 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1010 18:42:50.018148   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 18:43:02.604597   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-965291 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.005378132s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.43s)

TestMultiNode/serial/DeployApp2Nodes (5.49s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-965291 -- rollout status deployment/busybox: (3.957772486s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- exec busybox-7dff88458-b9w6z -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- exec busybox-7dff88458-hkxrz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- exec busybox-7dff88458-b9w6z -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- exec busybox-7dff88458-hkxrz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- exec busybox-7dff88458-b9w6z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- exec busybox-7dff88458-hkxrz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.49s)

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- exec busybox-7dff88458-b9w6z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- exec busybox-7dff88458-b9w6z -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- exec busybox-7dff88458-hkxrz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-965291 -- exec busybox-7dff88458-hkxrz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

TestMultiNode/serial/AddNode (51.69s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-965291 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-965291 -v 3 --alsologtostderr: (51.128033355s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.69s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-965291 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.59s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

TestMultiNode/serial/CopyFile (7.3s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp testdata/cp-test.txt multinode-965291:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp multinode-965291:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3017167106/001/cp-test_multinode-965291.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp multinode-965291:/home/docker/cp-test.txt multinode-965291-m02:/home/docker/cp-test_multinode-965291_multinode-965291-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m02 "sudo cat /home/docker/cp-test_multinode-965291_multinode-965291-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp multinode-965291:/home/docker/cp-test.txt multinode-965291-m03:/home/docker/cp-test_multinode-965291_multinode-965291-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m03 "sudo cat /home/docker/cp-test_multinode-965291_multinode-965291-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp testdata/cp-test.txt multinode-965291-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp multinode-965291-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3017167106/001/cp-test_multinode-965291-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp multinode-965291-m02:/home/docker/cp-test.txt multinode-965291:/home/docker/cp-test_multinode-965291-m02_multinode-965291.txt
E1010 18:44:59.530248   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291 "sudo cat /home/docker/cp-test_multinode-965291-m02_multinode-965291.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp multinode-965291-m02:/home/docker/cp-test.txt multinode-965291-m03:/home/docker/cp-test_multinode-965291-m02_multinode-965291-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m03 "sudo cat /home/docker/cp-test_multinode-965291-m02_multinode-965291-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp testdata/cp-test.txt multinode-965291-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp multinode-965291-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3017167106/001/cp-test_multinode-965291-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp multinode-965291-m03:/home/docker/cp-test.txt multinode-965291:/home/docker/cp-test_multinode-965291-m03_multinode-965291.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291 "sudo cat /home/docker/cp-test_multinode-965291-m03_multinode-965291.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 cp multinode-965291-m03:/home/docker/cp-test.txt multinode-965291-m02:/home/docker/cp-test_multinode-965291-m03_multinode-965291-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 ssh -n multinode-965291-m02 "sudo cat /home/docker/cp-test_multinode-965291-m03_multinode-965291-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.30s)

TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-965291 node stop m03: (1.435100469s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-965291 status: exit status 7 (425.535988ms)

-- stdout --
	multinode-965291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-965291-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-965291-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-965291 status --alsologtostderr: exit status 7 (432.784235ms)

-- stdout --
	multinode-965291
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-965291-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-965291-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1010 18:45:05.016792  116771 out.go:345] Setting OutFile to fd 1 ...
	I1010 18:45:05.017079  116771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:45:05.017090  116771 out.go:358] Setting ErrFile to fd 2...
	I1010 18:45:05.017097  116771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 18:45:05.017317  116771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 18:45:05.017516  116771 out.go:352] Setting JSON to false
	I1010 18:45:05.017554  116771 mustload.go:65] Loading cluster: multinode-965291
	I1010 18:45:05.017648  116771 notify.go:220] Checking for updates...
	I1010 18:45:05.017985  116771 config.go:182] Loaded profile config "multinode-965291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 18:45:05.018017  116771 status.go:174] checking status of multinode-965291 ...
	I1010 18:45:05.018469  116771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:45:05.018547  116771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:45:05.034977  116771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44007
	I1010 18:45:05.035434  116771 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:45:05.036073  116771 main.go:141] libmachine: Using API Version  1
	I1010 18:45:05.036090  116771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:45:05.036507  116771 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:45:05.036698  116771 main.go:141] libmachine: (multinode-965291) Calling .GetState
	I1010 18:45:05.038627  116771 status.go:371] multinode-965291 host status = "Running" (err=<nil>)
	I1010 18:45:05.038655  116771 host.go:66] Checking if "multinode-965291" exists ...
	I1010 18:45:05.039074  116771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:45:05.039127  116771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:45:05.054728  116771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43555
	I1010 18:45:05.055179  116771 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:45:05.055671  116771 main.go:141] libmachine: Using API Version  1
	I1010 18:45:05.055699  116771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:45:05.056081  116771 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:45:05.056336  116771 main.go:141] libmachine: (multinode-965291) Calling .GetIP
	I1010 18:45:05.059661  116771 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:45:05.060075  116771 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:45:05.060101  116771 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:45:05.060256  116771 host.go:66] Checking if "multinode-965291" exists ...
	I1010 18:45:05.060610  116771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:45:05.060660  116771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:45:05.076579  116771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35741
	I1010 18:45:05.077049  116771 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:45:05.077569  116771 main.go:141] libmachine: Using API Version  1
	I1010 18:45:05.077600  116771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:45:05.078056  116771 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:45:05.078281  116771 main.go:141] libmachine: (multinode-965291) Calling .DriverName
	I1010 18:45:05.078464  116771 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:45:05.078502  116771 main.go:141] libmachine: (multinode-965291) Calling .GetSSHHostname
	I1010 18:45:05.081177  116771 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:45:05.081562  116771 main.go:141] libmachine: (multinode-965291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c0:21", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:42:19 +0000 UTC Type:0 Mac:52:54:00:cf:c0:21 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:multinode-965291 Clientid:01:52:54:00:cf:c0:21}
	I1010 18:45:05.081599  116771 main.go:141] libmachine: (multinode-965291) DBG | domain multinode-965291 has defined IP address 192.168.39.28 and MAC address 52:54:00:cf:c0:21 in network mk-multinode-965291
	I1010 18:45:05.081669  116771 main.go:141] libmachine: (multinode-965291) Calling .GetSSHPort
	I1010 18:45:05.081861  116771 main.go:141] libmachine: (multinode-965291) Calling .GetSSHKeyPath
	I1010 18:45:05.081979  116771 main.go:141] libmachine: (multinode-965291) Calling .GetSSHUsername
	I1010 18:45:05.082091  116771 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/multinode-965291/id_rsa Username:docker}
	I1010 18:45:05.164722  116771 ssh_runner.go:195] Run: systemctl --version
	I1010 18:45:05.171501  116771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:45:05.188281  116771 kubeconfig.go:125] found "multinode-965291" server: "https://192.168.39.28:8443"
	I1010 18:45:05.188322  116771 api_server.go:166] Checking apiserver status ...
	I1010 18:45:05.188371  116771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:45:05.209275  116771 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1041/cgroup
	W1010 18:45:05.219745  116771 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1041/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:45:05.219826  116771 ssh_runner.go:195] Run: ls
	I1010 18:45:05.224296  116771 api_server.go:253] Checking apiserver healthz at https://192.168.39.28:8443/healthz ...
	I1010 18:45:05.229346  116771 api_server.go:279] https://192.168.39.28:8443/healthz returned 200:
	ok
	I1010 18:45:05.229370  116771 status.go:463] multinode-965291 apiserver status = Running (err=<nil>)
	I1010 18:45:05.229380  116771 status.go:176] multinode-965291 status: &{Name:multinode-965291 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1010 18:45:05.229395  116771 status.go:174] checking status of multinode-965291-m02 ...
	I1010 18:45:05.229770  116771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:45:05.229812  116771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:45:05.245266  116771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36743
	I1010 18:45:05.245776  116771 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:45:05.246386  116771 main.go:141] libmachine: Using API Version  1
	I1010 18:45:05.246410  116771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:45:05.246754  116771 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:45:05.246974  116771 main.go:141] libmachine: (multinode-965291-m02) Calling .GetState
	I1010 18:45:05.248693  116771 status.go:371] multinode-965291-m02 host status = "Running" (err=<nil>)
	I1010 18:45:05.248707  116771 host.go:66] Checking if "multinode-965291-m02" exists ...
	I1010 18:45:05.249047  116771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:45:05.249084  116771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:45:05.264049  116771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I1010 18:45:05.264492  116771 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:45:05.265041  116771 main.go:141] libmachine: Using API Version  1
	I1010 18:45:05.265064  116771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:45:05.265428  116771 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:45:05.265684  116771 main.go:141] libmachine: (multinode-965291-m02) Calling .GetIP
	I1010 18:45:05.268342  116771 main.go:141] libmachine: (multinode-965291-m02) DBG | domain multinode-965291-m02 has defined MAC address 52:54:00:06:6f:a3 in network mk-multinode-965291
	I1010 18:45:05.268741  116771 main.go:141] libmachine: (multinode-965291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:6f:a3", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:43:22 +0000 UTC Type:0 Mac:52:54:00:06:6f:a3 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-965291-m02 Clientid:01:52:54:00:06:6f:a3}
	I1010 18:45:05.268774  116771 main.go:141] libmachine: (multinode-965291-m02) DBG | domain multinode-965291-m02 has defined IP address 192.168.39.122 and MAC address 52:54:00:06:6f:a3 in network mk-multinode-965291
	I1010 18:45:05.268892  116771 host.go:66] Checking if "multinode-965291-m02" exists ...
	I1010 18:45:05.269243  116771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:45:05.269283  116771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:45:05.284429  116771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45755
	I1010 18:45:05.284919  116771 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:45:05.285428  116771 main.go:141] libmachine: Using API Version  1
	I1010 18:45:05.285449  116771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:45:05.285753  116771 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:45:05.285945  116771 main.go:141] libmachine: (multinode-965291-m02) Calling .DriverName
	I1010 18:45:05.286132  116771 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:45:05.286153  116771 main.go:141] libmachine: (multinode-965291-m02) Calling .GetSSHHostname
	I1010 18:45:05.289441  116771 main.go:141] libmachine: (multinode-965291-m02) DBG | domain multinode-965291-m02 has defined MAC address 52:54:00:06:6f:a3 in network mk-multinode-965291
	I1010 18:45:05.289903  116771 main.go:141] libmachine: (multinode-965291-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:6f:a3", ip: ""} in network mk-multinode-965291: {Iface:virbr1 ExpiryTime:2024-10-10 19:43:22 +0000 UTC Type:0 Mac:52:54:00:06:6f:a3 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:multinode-965291-m02 Clientid:01:52:54:00:06:6f:a3}
	I1010 18:45:05.289941  116771 main.go:141] libmachine: (multinode-965291-m02) DBG | domain multinode-965291-m02 has defined IP address 192.168.39.122 and MAC address 52:54:00:06:6f:a3 in network mk-multinode-965291
	I1010 18:45:05.290133  116771 main.go:141] libmachine: (multinode-965291-m02) Calling .GetSSHPort
	I1010 18:45:05.290319  116771 main.go:141] libmachine: (multinode-965291-m02) Calling .GetSSHKeyPath
	I1010 18:45:05.290512  116771 main.go:141] libmachine: (multinode-965291-m02) Calling .GetSSHUsername
	I1010 18:45:05.290650  116771 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19787-81676/.minikube/machines/multinode-965291-m02/id_rsa Username:docker}
	I1010 18:45:05.368132  116771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:45:05.382047  116771 status.go:176] multinode-965291-m02 status: &{Name:multinode-965291-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1010 18:45:05.382090  116771 status.go:174] checking status of multinode-965291-m03 ...
	I1010 18:45:05.382437  116771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1010 18:45:05.382484  116771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1010 18:45:05.397937  116771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42597
	I1010 18:45:05.398478  116771 main.go:141] libmachine: () Calling .GetVersion
	I1010 18:45:05.398997  116771 main.go:141] libmachine: Using API Version  1
	I1010 18:45:05.399021  116771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1010 18:45:05.399364  116771 main.go:141] libmachine: () Calling .GetMachineName
	I1010 18:45:05.399571  116771 main.go:141] libmachine: (multinode-965291-m03) Calling .GetState
	I1010 18:45:05.401414  116771 status.go:371] multinode-965291-m03 host status = "Stopped" (err=<nil>)
	I1010 18:45:05.401431  116771 status.go:384] host is not running, skipping remaining checks
	I1010 18:45:05.401438  116771 status.go:176] multinode-965291-m03 status: &{Name:multinode-965291-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)

TestMultiNode/serial/StartAfterStop (38.88s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-965291 node start m03 -v=7 --alsologtostderr: (38.234666027s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.88s)

TestMultiNode/serial/DeleteNode (2.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-965291 node delete m03: (1.730081296s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.27s)

TestMultiNode/serial/RestartMultiNode (182.32s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-965291 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1010 18:54:59.537108   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-965291 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.767893168s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-965291 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (182.32s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-965291
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-965291-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-965291-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (72.500899ms)

                                                
                                                
-- stdout --
	* [multinode-965291-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19787
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-965291-m02' is duplicated with machine name 'multinode-965291-m02' in profile 'multinode-965291'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-965291-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-965291-m03 --driver=kvm2  --container-runtime=crio: (43.62407367s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-965291
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-965291: exit status 80 (223.721717ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-965291 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-965291-m03 already exists in multinode-965291-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-965291-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.78s)
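The name conflict above is expected: multinode-965291-m02 is already the machine name of the second node inside the multinode-965291 profile, so it cannot double as a profile name. The names already in use can be listed with the same commands the test relies on:

    out/minikube-linux-amd64 profile list --output json
    out/minikube-linux-amd64 node list -p multinode-965291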

                                                
                                    
x
+
TestScheduledStopUnix (115.81s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-795686 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-795686 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.141092904s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-795686 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-795686 -n scheduled-stop-795686
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-795686 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1010 19:02:42.097032   88876 retry.go:31] will retry after 77.096µs: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.098217   88876 retry.go:31] will retry after 115.134µs: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.099356   88876 retry.go:31] will retry after 274.41µs: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.100512   88876 retry.go:31] will retry after 437.093µs: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.101660   88876 retry.go:31] will retry after 583.522µs: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.102799   88876 retry.go:31] will retry after 632.257µs: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.103932   88876 retry.go:31] will retry after 1.547914ms: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.106163   88876 retry.go:31] will retry after 1.750442ms: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.108404   88876 retry.go:31] will retry after 1.324021ms: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.110611   88876 retry.go:31] will retry after 5.547804ms: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.116864   88876 retry.go:31] will retry after 8.135499ms: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.126091   88876 retry.go:31] will retry after 5.240417ms: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.132384   88876 retry.go:31] will retry after 19.028626ms: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.151569   88876 retry.go:31] will retry after 18.863449ms: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
I1010 19:02:42.170853   88876 retry.go:31] will retry after 19.107531ms: open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/scheduled-stop-795686/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-795686 --cancel-scheduled
E1010 19:02:50.018125   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-795686 -n scheduled-stop-795686
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-795686
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-795686 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-795686
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-795686: exit status 7 (67.239492ms)

                                                
                                                
-- stdout --
	scheduled-stop-795686
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-795686 -n scheduled-stop-795686
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-795686 -n scheduled-stop-795686: exit status 7 (67.936327ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-795686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-795686
--- PASS: TestScheduledStopUnix (115.81s)
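Condensed, the scheduled-stop flow exercised above needs only three invocations, with the flags exactly as they appear in this log:

    out/minikube-linux-amd64 stop -p scheduled-stop-795686 --schedule 5m          # arm a stop five minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-795686 --cancel-scheduled     # disarm the pending stop
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-795686   # inspect the countdown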

                                                
                                    
x
+
TestRunningBinaryUpgrade (164.44s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1422130679 start -p running-upgrade-001575 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1010 19:07:33.092368   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1422130679 start -p running-upgrade-001575 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m6.391952795s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-001575 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-001575 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m36.437799421s)
helpers_test.go:175: Cleaning up "running-upgrade-001575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-001575
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-001575: (1.216494311s)
--- PASS: TestRunningBinaryUpgrade (164.44s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-693624 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-693624 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (94.466465ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-693624] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19787
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
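As the MK_USAGE message states, --kubernetes-version cannot be combined with --no-kubernetes. The valid forms, taken from later steps in this report and from the hint in the error text, are:

    out/minikube-linux-amd64 start -p NoKubernetes-693624 --no-kubernetes --driver=kvm2 --container-runtime=crio
    minikube config unset kubernetes-version   # clear a globally configured version first, if one is set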

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (116.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-693624 --driver=kvm2  --container-runtime=crio
E1010 19:04:59.530171   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-693624 --driver=kvm2  --container-runtime=crio: (1m56.553940511s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-693624 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (116.82s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (107.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2761907943 start -p stopped-upgrade-914420 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2761907943 start -p stopped-upgrade-914420 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m2.608566755s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2761907943 -p stopped-upgrade-914420 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2761907943 -p stopped-upgrade-914420 stop: (2.229111359s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-914420 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-914420 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.598501451s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.44s)
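For reference, the stopped-binary upgrade path boils down to three steps: start the cluster with the old release, stop it with that same binary, then restart it with the binary under test (paths and flags as in the run above):

    /tmp/minikube-v1.26.0.2761907943 start -p stopped-upgrade-914420 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.2761907943 -p stopped-upgrade-914420 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-914420 --memory=2200 --driver=kvm2 --container-runtime=crio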

                                                
                                    
x
+
TestPause/serial/Start (109.21s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-215992 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-215992 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m49.206914237s)
--- PASS: TestPause/serial/Start (109.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (35.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-693624 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-693624 --no-kubernetes --driver=kvm2  --container-runtime=crio: (34.669030862s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-693624 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-693624 status -o json: exit status 2 (246.368725ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-693624","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-693624
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (35.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (41.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-693624 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-693624 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.88892182s)
--- PASS: TestNoKubernetes/serial/Start (41.89s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-914420
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-693624 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-693624 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.028417ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-693624
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-693624: (1.368587554s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-873515 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-873515 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (244.126655ms)

                                                
                                                
-- stdout --
	* [false-873515] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19787
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 19:07:10.716303  127239 out.go:345] Setting OutFile to fd 1 ...
	I1010 19:07:10.716408  127239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:07:10.716412  127239 out.go:358] Setting ErrFile to fd 2...
	I1010 19:07:10.716416  127239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1010 19:07:10.716608  127239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19787-81676/.minikube/bin
	I1010 19:07:10.717228  127239 out.go:352] Setting JSON to false
	I1010 19:07:10.718139  127239 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10177,"bootTime":1728577054,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 19:07:10.718258  127239 start.go:139] virtualization: kvm guest
	I1010 19:07:10.720751  127239 out.go:177] * [false-873515] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1010 19:07:10.722392  127239 notify.go:220] Checking for updates...
	I1010 19:07:10.722435  127239 out.go:177]   - MINIKUBE_LOCATION=19787
	I1010 19:07:10.723922  127239 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 19:07:10.725485  127239 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19787-81676/kubeconfig
	I1010 19:07:10.727058  127239 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19787-81676/.minikube
	I1010 19:07:10.728705  127239 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 19:07:10.730278  127239 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 19:07:10.732279  127239 config.go:182] Loaded profile config "NoKubernetes-693624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1010 19:07:10.732396  127239 config.go:182] Loaded profile config "kubernetes-upgrade-857939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1010 19:07:10.732490  127239 config.go:182] Loaded profile config "pause-215992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1010 19:07:10.732584  127239 driver.go:394] Setting default libvirt URI to qemu:///system
	I1010 19:07:10.903319  127239 out.go:177] * Using the kvm2 driver based on user configuration
	I1010 19:07:10.904738  127239 start.go:297] selected driver: kvm2
	I1010 19:07:10.904753  127239 start.go:901] validating driver "kvm2" against <nil>
	I1010 19:07:10.904776  127239 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 19:07:10.907030  127239 out.go:201] 
	W1010 19:07:10.908408  127239 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1010 19:07:10.909660  127239 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-873515 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-873515" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-873515" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 10 Oct 2024 19:06:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.61.12:8443
  name: pause-215992
contexts:
- context:
    cluster: pause-215992
    extensions:
    - extension:
        last-update: Thu, 10 Oct 2024 19:06:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-215992
  name: pause-215992
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-215992
  user:
    client-certificate: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/pause-215992/client.crt
    client-key: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/pause-215992/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-873515

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-873515"

                                                
                                                
----------------------- debugLogs end: false-873515 [took: 3.307978409s] --------------------------------
helpers_test.go:175: Cleaning up "false-873515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-873515
--- PASS: TestNetworkPlugins/group/false (3.78s)
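The non-zero exit above is intentional: with the crio runtime, minikube rejects --cni=false because a CNI plugin is required. Any concrete CNI selection satisfies the check, as the kindnet, calico and custom-flannel runs below show. A minimal sketch (the profile name is hypothetical, and bridge is assumed to be among the accepted --cni values) would be:

    out/minikube-linux-amd64 start -p cni-example --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio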

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (22.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-693624 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-693624 --driver=kvm2  --container-runtime=crio: (22.844350736s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.84s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (48.54s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-215992 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-215992 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.516536942s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (48.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-693624 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-693624 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.079794ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestPause/serial/Pause (0.91s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-215992 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.91s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-215992 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-215992 --output=json --layout=cluster: exit status 2 (283.774006ms)

                                                
                                                
-- stdout --
	{"Name":"pause-215992","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-215992","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
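In the --layout=cluster JSON above, the StatusCode fields follow the HTTP-style convention visible in the output itself: 200 OK, 405 Stopped, 418 Paused. Assuming jq is available on the host, the per-node component states can be pulled out with:

    out/minikube-linux-amd64 status -p pause-215992 --output=json --layout=cluster | jq '.Nodes[].Components'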

                                                
                                    
x
+
TestPause/serial/Unpause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-215992 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.91s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-215992 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.89s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-215992 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.89s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.54s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (81.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m21.261386305s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (113.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m53.19183656s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (113.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (118.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m58.166152639s)
--- PASS: TestNetworkPlugins/group/calico/Start (118.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-873515 "pgrep -a kubelet"
I1010 19:11:23.694619   88876 config.go:182] Loaded profile config "auto-873515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-873515 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9vvsv" [1ec6f165-b961-4115-889c-7a1931ed6050] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9vvsv" [1ec6f165-b961-4115-889c-7a1931ed6050] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004981628s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-873515 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (81.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.062525387s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ttqqq" [2606c993-dbde-41d0-a275-43cb2ad9d009] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004898507s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-873515 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-q2jcx" [ef0f0b37-a154-4dad-9423-6c3a0e43b94c] Running
I1010 19:12:04.798263   88876 config.go:182] Loaded profile config "kindnet-873515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004372056s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-873515 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7n6cw" [7b0f6843-ca0f-4fa6-8e8a-b1e3e10b9550] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7n6cw" [7b0f6843-ca0f-4fa6-8e8a-b1e3e10b9550] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004101826s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-873515 "pgrep -a kubelet"
I1010 19:12:10.974568   88876 config.go:182] Loaded profile config "calico-873515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-873515 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kmc2k" [d9e9b950-d75b-4f78-a3b4-ea8a5d99543b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kmc2k" [d9e9b950-d75b-4f78-a3b4-ea8a5d99543b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006526237s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-873515 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-873515 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (58.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (58.744560066s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (58.74s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (99.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m39.628840576s)
--- PASS: TestNetworkPlugins/group/flannel/Start (99.63s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (107.60s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1010 19:12:50.018501   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-873515 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m47.600650279s)
--- PASS: TestNetworkPlugins/group/bridge/Start (107.60s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-873515 "pgrep -a kubelet"
I1010 19:13:11.535757   88876 config.go:182] Loaded profile config "custom-flannel-873515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-873515 replace --force -f testdata/netcat-deployment.yaml
I1010 19:13:11.759617   88876 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q5rwd" [50953533-8582-4445-aa7a-b4be06fc7012] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-q5rwd" [50953533-8582-4445-aa7a-b4be06fc7012] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005224706s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-873515 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-873515 "pgrep -a kubelet"
I1010 19:13:28.031899   88876 config.go:182] Loaded profile config "enable-default-cni-873515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-873515 replace --force -f testdata/netcat-deployment.yaml
I1010 19:13:28.362748   88876 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f96wp" [2ce5975a-3adc-4cc5-a47a-a5d24fb885be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f96wp" [2ce5975a-3adc-4cc5-a47a-a5d24fb885be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004588695s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-873515 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (95.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-320324 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-320324 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m35.064821584s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (95.06s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6njkw" [521297da-4046-4454-bf82-cfe82b665a17] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004458684s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-873515 "pgrep -a kubelet"
I1010 19:14:20.822700   88876 config.go:182] Loaded profile config "flannel-873515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-873515 replace --force -f testdata/netcat-deployment.yaml
I1010 19:14:21.075488   88876 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xdchw" [6840d33e-3279-4022-ae05-596f88a77885] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xdchw" [6840d33e-3279-4022-ae05-596f88a77885] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.004820031s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-873515 "pgrep -a kubelet"
I1010 19:14:29.270897   88876 config.go:182] Loaded profile config "bridge-873515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-873515 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6m97w" [48b879e6-0e93-40ff-b262-cdceeffd2db9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6m97w" [48b879e6-0e93-40ff-b262-cdceeffd2db9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004971324s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-873515 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-873515 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-873515 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (87.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-541370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-541370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m27.705921574s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.71s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (66.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-029826 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-029826 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m6.779302962s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (66.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-320324 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0d6bf6d0-79fc-47a0-b588-a7c47c06e191] Pending
helpers_test.go:344: "busybox" [0d6bf6d0-79fc-47a0-b588-a7c47c06e191] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0d6bf6d0-79fc-47a0-b588-a7c47c06e191] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005339244s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-320324 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-320324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-320324 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-029826 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-029826 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.076801838s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-029826 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-029826 --alsologtostderr -v=3: (10.546951864s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-029826 -n newest-cni-029826
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-029826 -n newest-cni-029826: exit status 7 (67.132715ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-029826 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-029826 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-029826 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (37.240175542s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-029826 -n newest-cni-029826
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-541370 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [749f166d-a128-419b-b39f-c43bda956386] Pending
helpers_test.go:344: "busybox" [749f166d-a128-419b-b39f-c43bda956386] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1010 19:16:22.608684   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [749f166d-a128-419b-b39f-c43bda956386] Running
E1010 19:16:23.923308   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:23.929735   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:23.941199   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:23.962672   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:24.004157   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:24.085702   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:24.247770   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:24.569678   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:25.211731   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:26.493758   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004613014s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-541370 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-541370 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1010 19:16:29.055205   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-541370 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.063550951s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-541370 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-029826 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-029826 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-029826 -n newest-cni-029826
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-029826 -n newest-cni-029826: exit status 2 (240.509094ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-029826 -n newest-cni-029826
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-029826 -n newest-cni-029826: exit status 2 (243.157917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-029826 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-029826 -n newest-cni-029826
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-029826 -n newest-cni-029826
E1010 19:16:58.553001   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:58.559376   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:58.570776   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:58.592250   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:58.633728   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:16:58.715217   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-361847 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1010 19:17:01.122451   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:03.684449   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:04.753447   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:04.759853   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:04.771185   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:04.792601   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:04.834031   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:04.901562   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:04.916020   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:05.077597   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:05.399312   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:06.040775   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:07.322337   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:08.806561   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:09.884212   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:15.005690   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:19.048422   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:25.247717   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:17:39.529886   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-361847 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m32.04797853s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (651.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-320324 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1010 19:18:14.321057   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:16.883023   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:20.491325   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:18:22.004775   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-320324 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m51.630945811s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-320324 -n no-preload-320324
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (651.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-361847 create -f testdata/busybox.yaml
E1010 19:18:32.246161   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1c45d975-caef-4f99-9800-96fb85524acb] Pending
helpers_test.go:344: "busybox" [1c45d975-caef-4f99-9800-96fb85524acb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1010 19:18:33.481913   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [1c45d975-caef-4f99-9800-96fb85524acb] Running
E1010 19:18:38.603601   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00418886s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-361847 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-361847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-361847 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (569.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-541370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1010 19:19:07.785112   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:09.327439   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:14.583936   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:14.590373   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:14.601888   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:14.623401   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:14.664971   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:14.746854   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:14.908720   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:15.230525   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:15.872588   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:17.154212   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:19.715725   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:24.837491   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:29.522083   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:29.528533   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:29.540010   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:29.561477   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:29.602973   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:29.684468   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:29.846520   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:30.168606   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:30.809944   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:32.091572   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:33.690077   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:34.653500   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:35.079309   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:39.775789   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:42.413148   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:48.613142   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:50.017556   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:19:50.289231   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-541370 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m29.00952906s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-541370 -n embed-certs-541370
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (569.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-947203 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-947203 --alsologtostderr -v=3: (2.292271983s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-947203 -n old-k8s-version-947203: exit status 7 (75.795341ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-947203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (469.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-361847 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1010 19:21:23.923542   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:21:51.627748   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:21:58.445435   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:21:58.553066   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:22:04.753395   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:22:13.383962   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:22:26.255288   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:22:32.454672   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:22:50.017985   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:23:11.750488   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:23:28.350469   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:23:39.453649   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:23:56.053301   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:24:13.094074   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:24:14.583652   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:24:29.521826   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:24:42.287636   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:24:57.225313   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/bridge-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:24:59.530473   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:26:23.923083   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/auto-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:26:58.553225   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/kindnet-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:27:04.753312   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/calico-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:27:50.017960   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/functional-346405/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:28:11.750315   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/custom-flannel-873515/client.crt: no such file or directory" logger="UnhandledError"
E1010 19:28:28.350149   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/enable-default-cni-873515/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-361847 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (7m49.450121753s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-361847 -n default-k8s-diff-port-361847
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (469.72s)

                                                
                                    

Test skip (37/318)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
262 TestNetworkPlugins/group/kubenet 3.24
274 TestNetworkPlugins/group/cilium 3.53
281 TestStartStop/group/disable-driver-mounts 0.16
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-473910 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-873515 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-873515" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-873515" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-873515" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 10 Oct 2024 19:06:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.61.12:8443
  name: pause-215992
contexts:
- context:
    cluster: pause-215992
    extensions:
    - extension:
        last-update: Thu, 10 Oct 2024 19:06:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-215992
  name: pause-215992
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-215992
  user:
    client-certificate: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/pause-215992/client.crt
    client-key: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/pause-215992/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-873515

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-873515"

                                                
                                                
----------------------- debugLogs end: kubenet-873515 [took: 3.090175716s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-873515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-873515
--- SKIP: TestNetworkPlugins/group/kubenet (3.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-873515 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-873515

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-873515

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-873515

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-873515

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-873515

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-873515

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-873515

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-873515

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-873515

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-873515

>>> host: /etc/nsswitch.conf:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: /etc/hosts:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: /etc/resolv.conf:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-873515

>>> host: crictl pods:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: crictl containers:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> k8s: describe netcat deployment:
error: context "cilium-873515" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-873515" does not exist

>>> k8s: netcat logs:
error: context "cilium-873515" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-873515" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-873515" does not exist

>>> k8s: coredns logs:
error: context "cilium-873515" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-873515" does not exist

>>> k8s: api server logs:
error: context "cilium-873515" does not exist

>>> host: /etc/cni:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: ip a s:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: ip r s:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: iptables-save:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: iptables table nat:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-873515

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-873515

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-873515" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-873515" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-873515

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-873515

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-873515" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-873515" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-873515" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-873515" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-873515" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: kubelet daemon config:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> k8s: kubelet logs:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19787-81676/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 10 Oct 2024 19:06:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.61.12:8443
  name: pause-215992
contexts:
- context:
    cluster: pause-215992
    extensions:
    - extension:
        last-update: Thu, 10 Oct 2024 19:06:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-215992
  name: pause-215992
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-215992
  user:
    client-certificate: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/pause-215992/client.crt
    client-key: /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/pause-215992/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-873515

>>> host: docker daemon status:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: docker daemon config:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: docker system info:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: cri-docker daemon status:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: cri-docker daemon config:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: cri-dockerd version:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: containerd daemon status:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: containerd daemon config:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: containerd config dump:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: crio daemon status:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: crio daemon config:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: /etc/crio:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

>>> host: crio config:
* Profile "cilium-873515" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-873515"

----------------------- debugLogs end: cilium-873515 [took: 3.364692019s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-873515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-873515
--- SKIP: TestNetworkPlugins/group/cilium (3.53s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-817137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-817137
E1010 19:14:59.530767   88876 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19787-81676/.minikube/profiles/addons-473910/client.crt: no such file or directory" logger="UnhandledError"
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
